Theoretical chemistry
Theoretical chemistry is the branch of chemistry which develops theoretical generalizations that are part of the theoretical arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reaction, valence, the potential energy surface, molecular orbitals, orbital interactions, and molecule activation.
Overview
Theoretical chemistry unites principles and concepts common to all branches of chemistry. Within its framework, chemical laws, principles and rules are systematized, refined, detailed and organized into a hierarchy. The central place in theoretical chemistry is occupied by the study of how the structure and the properties of molecular systems are interconnected. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate, understand, and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, and because of the high complexity of chemical systems, theoretical chemistry often uses semi-empirical and empirical methods in addition to approximate mathematical methods.
In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics, statistical thermodynamics and theories of electrolyte solutions, reaction networks, polymerization, catalysis, molecular magnetism and spectroscopy.
Modern theoretical chemistry may be roughly divided into the study of chemical structure and the study of chemical dynamics. The former includes studies of: electronic structure, potential energy surfaces, and force fields; vibrational-rotational motion; equilibrium properties of condensed-phase systems and macro-molecules. Chemical dynamics includes: bimolecular kinetics and the collision theory of reactions and energy transfer; unimolecular rate theory and metastable states; condensed-phase and macromolecular aspects of dynamics.
Branches of theoretical chemistry
Quantum chemistry The application of quantum mechanics or fundamental interactions to chemical and physico-chemical problems. Spectroscopic and magnetic properties are among the most frequently modelled.
Computational chemistry The application of scientific computing to chemistry, involving approximation schemes such as Hartree–Fock, post-Hartree–Fock, density functional theory, semi-empirical methods (such as PM3) or force field methods. Molecular shape is the most frequently predicted property. Computers can also predict vibrational spectra and vibronic coupling, as well as acquire infrared data and Fourier-transform it into frequency information. Comparison with the predicted vibrations supports the predicted shape.
Molecular modelling Methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are molecular docking, protein–protein docking, drug design, and combinatorial chemistry. The fitting of shape and electric potential are the driving factors in this graphical approach.
Molecular dynamics Application of classical mechanics to simulate the movement of the nuclei of an assembly of atoms and molecules. The rearrangement of molecules within an ensemble is controlled by van der Waals forces and promoted by temperature (a minimal integration sketch is given after this list).
Molecular mechanics Modeling of the intra- and inter-molecular interaction potential energy surfaces via potentials. The latter are usually parameterized from ab initio calculations.
Mathematical chemistry Discussion and prediction of molecular structure using mathematical methods without necessarily referring to quantum mechanics. Topology is a branch of mathematics that allows researchers to predict properties of flexible, finite-size bodies such as clusters.
Chemical kinetics Theoretical study of the dynamical systems associated with reactive chemicals, the activated complex and their corresponding differential equations.
Cheminformatics (also known as chemoinformatics) The use of computer and informational techniques to process chemical information and solve problems in the field of chemistry.
Chemical engineering The application of chemistry to industrial processes to conduct research and development. This allows for development and improvement of new and existing products and manufacturing processes.
Chemical thermodynamics The study of the relationship between heat, work, and energy in chemical reactions and processes, with focus on entropy, enthalpy, and Gibbs free energy to understand reaction spontaneity and equilibrium.
Statistical mechanics The application of statistical mechanics to predict and explain thermodynamic properties of chemical systems, connecting molecular behavior with macroscopic properties.
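To make the molecular dynamics and molecular mechanics entries above concrete, the following minimal Python sketch integrates two atoms interacting through a Lennard-Jones pair potential with the velocity Verlet algorithm. The reduced-unit parameter values are purely illustrative assumptions, not data from this article.

# Minimal molecular dynamics sketch: two atoms on a line, Lennard-Jones
# interaction, velocity Verlet integration. All values are illustrative
# reduced units (an assumption, not taken from the text).
EPS, SIGMA, MASS, DT = 1.0, 1.0, 1.0, 0.002

def lj_force(r):
    # F(r) = -dV/dr for V(r) = 4*EPS*((SIGMA/r)**12 - (SIGMA/r)**6);
    # a positive value pushes the pair apart.
    sr6 = (SIGMA / r) ** 6
    return 24.0 * EPS * (2.0 * sr6 * sr6 - sr6) / r

def simulate(x1=0.0, x2=1.5, v1=0.0, v2=0.0, steps=5000):
    # Velocity Verlet: update positions, recompute forces, then update velocities.
    f = lj_force(x2 - x1)
    a1, a2 = -f / MASS, f / MASS
    for _ in range(steps):
        x1 += v1 * DT + 0.5 * a1 * DT * DT
        x2 += v2 * DT + 0.5 * a2 * DT * DT
        f = lj_force(x2 - x1)
        a1_new, a2_new = -f / MASS, f / MASS
        v1 += 0.5 * (a1 + a1_new) * DT
        v2 += 0.5 * (a2 + a2_new) * DT
        a1, a2 = a1_new, a2_new
    return x2 - x1

print("final separation:", simulate())  # oscillates around the minimum near 1.12*SIGMA

In a full molecular mechanics force field, such pair potentials would be supplemented by bonded terms (bond stretching, angle bending, torsions), parameterized as described above.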
Closely related disciplines
Historically, the major fields of application of theoretical chemistry have been the following areas of research:
Atomic physics: The discipline dealing with electrons and atomic nuclei.
Molecular physics: The discipline of the electrons surrounding the molecular nuclei and of movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.
Physical chemistry and chemical physics: Chemistry investigated via physical methods such as laser techniques, scanning tunneling microscopy, etc. The formal distinction between the two fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics. In practice this distinction is quite vague.
Many-body theory: The discipline studying the effects which appear in systems with a large number of constituents. It is based on quantum physics – mostly the second quantization formalism – and quantum electrodynamics.
Hence, theoretical chemistry has emerged as a branch of research. With the rise of density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems relevant to other fields of chemistry and physics, including biochemistry, condensed matter physics, nanotechnology and molecular biology.
See also
List of unsolved problems in chemistry
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria.
Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids).
Some of the relationships that physical chemistry strives to understand include the effects of:
Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids).
Reaction kinetics on the rate of a reaction.
The identity of ions and the electrical conductivity of materials.
Surface science and electrochemistry of cell membranes.
Interaction of one body with another in terms of quantities of heat and work, a relationship known as thermodynamics.
Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, known as thermochemistry.
Study of colligative properties, which depend on the number of species present in solution.
The number of phases, the number of components and the degrees of freedom (or variance), which can be correlated with one another with the help of the phase rule (stated after this list).
Reactions of electrochemical cells.
Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics.
Calculation of the energy of electron movement in molecules and metal complexes.
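For reference, the phase rule mentioned in this list is usually written in the Gibbs form (a standard textbook statement added here, not quoted from the source):
F = C − P + 2
where F is the number of degrees of freedom (variance), C the number of components and P the number of phases. For example, pure water at its triple point has C = 1 and P = 3, so F = 0: neither temperature nor pressure can be changed without losing a phase.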
Key concepts
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.
One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them.
Disciplines
Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.
Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium.
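The link between the thermal expansion coefficient and the pressure dependence of entropy mentioned above is a Maxwell relation; as a reminder (a standard result, added here for concreteness rather than taken from the text):
(∂S/∂p)T = −(∂V/∂T)p = −Vα
where α is the volumetric thermal expansion coefficient. Measuring how a sample expands with temperature therefore fixes how its entropy changes with pressure.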
Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate.
The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10^23) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities.
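As a minimal numerical illustration of this statistical-mechanical idea, the sketch below (Python; the energy gap and temperature are hypothetical example values) computes the Boltzmann populations and average energy of a two-level system from its partition function:

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def two_level_stats(delta_e, temperature):
    # Partition function and thermal averages for a ground state at energy 0
    # and an excited state at energy delta_e (in joules).
    boltzmann_factor = math.exp(-delta_e / (K_B * temperature))
    z = 1.0 + boltzmann_factor
    p_excited = boltzmann_factor / z
    return z, p_excited, p_excited * delta_e

# Hypothetical example: a 2.0e-21 J gap at room temperature.
z, p, avg_e = two_level_stats(2.0e-21, 298.0)
print(f"partition function {z:.3f}, excited-state population {p:.3f}")

A handful of such averages (populations, mean energy) already determine the bulk observables, which is the sense in which a few macroscopic variables suffice.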
History
The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations".
Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule.
The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909.
Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development.
Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry.
See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship
Journals
Some journals that deal with physical chemistry include
Zeitschrift für Physikalische Chemie (1887)
Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997)
Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905)
Macromolecular Chemistry and Physics (1947)
Annual Review of Physical Chemistry (1950)
Molecular Physics (1957)
Journal of Physical Organic Chemistry (1988)
Journal of Physical Chemistry B (1997)
ChemPhysChem (2000)
Journal of Physical Chemistry C (2007)
Journal of Physical Chemistry Letters (from 2010, combined letters previously published in the separate journals)
Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914).
Branches and related topics
Chemical thermodynamics
Chemical kinetics
Statistical mechanics
Quantum chemistry
Electrochemistry
Photochemistry
Surface chemistry
Solid-state chemistry
Spectroscopy
Biophysical chemistry
Materials science
Physical organic chemistry
Micromeritics
See also
List of important publications in chemistry#Physical chemistry
List of unsolved problems in chemistry#Physical chemistry problems
Physical biochemistry
:Category:Physical chemists
External links
The World of Physical Chemistry (Keith J. Laidler, 1993)
Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996)
Physical Chemistry: neither Fish nor Fowl? (Joachim Schummer, The Autonomy of Chemistry, Würzburg, Königshausen & Neumann, 1998, pp. 135–148)
The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
Chemical reaction
A chemical reaction is a process that leads to the chemical transformation of one set of chemical substances to another. When chemical reactions occur, the atoms are rearranged and the reaction is accompanied by an energy change as new products are generated. Classically, chemical reactions encompass changes that only involve the positions of electrons in the forming and breaking of chemical bonds between atoms, with no change to the nuclei (no change to the elements present), and can often be described by a chemical equation. Nuclear chemistry is a sub-discipline of chemistry that involves the chemical reactions of unstable and radioactive elements where both electronic and nuclear changes can occur.
The substance (or substances) initially involved in a chemical reaction are called reactants or reagents. Chemical reactions are usually characterized by a chemical change, and they yield one or more products, which usually have properties different from the reactants. Reactions often consist of a sequence of individual sub-steps, the so-called elementary reactions, and the information on the precise course of action is part of the reaction mechanism. Chemical reactions are described with chemical equations, which symbolically present the starting materials, end products, and sometimes intermediate products and reaction conditions.
Chemical reactions happen at a characteristic reaction rate at a given temperature and chemical concentration. Some reactions produce heat and are called exothermic reactions, while others may require heat to enable the reaction to occur, which are called endothermic reactions. Typically, reaction rates increase with increasing temperature because there is more thermal energy available to reach the activation energy necessary for breaking bonds between atoms.
A reaction may be classified as redox in which oxidation and reduction occur or non-redox in which there is no oxidation and reduction occurring. Most simple redox reactions may be classified as a combination, decomposition, or single displacement reaction.
Different chemical reactions are used during chemical synthesis in order to obtain the desired product. In biochemistry, a consecutive series of chemical reactions (where the product of one reaction is the reactant of the next reaction) form metabolic pathways. These reactions are often catalyzed by protein enzymes. Enzymes increase the rates of biochemical reactions, so that metabolic syntheses and decompositions impossible under ordinary conditions can occur at the temperature and concentrations present within a cell.
The general concept of a chemical reaction has been extended to reactions between entities smaller than atoms, including nuclear reactions, radioactive decays and reactions between elementary particles, as described by quantum field theory.
History
Chemical reactions such as combustion in fire, fermentation and the reduction of ores to metals were known since antiquity. Initial theories of transformation of materials were developed by Greek philosophers, such as the Four-Element Theory of Empedocles stating that any substance is composed of the four basic elements – fire, water, air and earth. In the Middle Ages, chemical transformations were studied by alchemists. They attempted, in particular, to convert lead into gold, for which purpose they used reactions of lead and lead-copper alloys with sulfur.
The artificial production of chemical substances already was a central goal for medieval alchemists. Examples include the synthesis of ammonium chloride from organic substances as described in the works (c. 850–950) attributed to Jābir ibn Ḥayyān, or the production of mineral acids such as sulfuric and nitric acids by later alchemists, starting from c. 1300. The production of mineral acids involved the heating of sulfate and nitrate minerals such as copper sulfate, alum and saltpeter. In the 17th century, Johann Rudolph Glauber produced hydrochloric acid and sodium sulfate by reacting sulfuric acid and sodium chloride. With the development of the lead chamber process in 1746 and the Leblanc process, allowing large-scale production of sulfuric acid and sodium carbonate, respectively, chemical reactions became implemented into the industry. Further optimization of sulfuric acid technology resulted in the contact process in the 1880s, and the Haber process was developed in 1909–1910 for ammonia synthesis.
From the 16th century, researchers including Jan Baptist van Helmont, Robert Boyle, and Isaac Newton tried to establish theories of experimentally observed chemical transformations. The phlogiston theory was proposed in 1667 by Johann Joachim Becher. It postulated the existence of a fire-like element called "phlogiston", which was contained within combustible bodies and released during combustion. This was proven false in 1785 by Antoine Lavoisier, who found the correct explanation of combustion as a reaction with oxygen from the air.
Joseph Louis Gay-Lussac recognized in 1808 that gases always react in a certain relationship with each other. Based on this idea and the atomic theory of John Dalton, Joseph Proust had developed the law of definite proportions, which later resulted in the concepts of stoichiometry and chemical equations.
Regarding organic chemistry, it was long believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force" and distinguished from inorganic materials. This separation was ended, however, by the synthesis of urea from inorganic precursors by Friedrich Wöhler in 1828. Other chemists who brought major contributions to organic chemistry include Alexander William Williamson with his synthesis of ethers and Christopher Kelk Ingold, who, among many discoveries, established the mechanisms of substitution reactions.
Characteristics
The general characteristics of chemical reactions are:
Evolution of a gas
Formation of a precipitate
Change in temperature
Change in state
Equations
Chemical equations are used to graphically illustrate chemical reactions. They consist of chemical or structural formulas of the reactants on the left and those of the products on the right. They are separated by an arrow (→) which indicates the direction and type of the reaction; the arrow is read as the word "yields". The tip of the arrow points in the direction in which the reaction proceeds. A double arrow pointing in opposite directions is used for equilibrium reactions. Equations should be balanced according to the stoichiometry: the number of atoms of each species should be the same on both sides of the equation. This is achieved by scaling the number of involved molecules (A, B, C and D in a schematic example below) by the appropriate integers a, b, c and d.
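The scaling step can be phrased as a small linear-algebra problem: the vector of coefficients (a, b, c, d) spans the null space of the element-composition matrix. A minimal sketch (Python with sympy assumed available; methane combustion is chosen purely as an illustrative example):

from math import lcm
from sympy import Matrix

# Columns: CH4, O2, CO2, H2O; rows: C, H, O.
# Product columns carry negative signs so that composition * coeffs = 0.
composition = Matrix([
    [1, 0, -1,  0],   # carbon
    [4, 0,  0, -2],   # hydrogen
    [0, 2, -2, -1],   # oxygen
])

null_vector = composition.nullspace()[0]               # rational solution
scale = lcm(*(int(entry.q) for entry in null_vector))  # clear denominators
coefficients = [int(entry * scale) for entry in null_vector]
print(coefficients)  # [1, 2, 1, 2]  ->  CH4 + 2 O2 -> CO2 + 2 H2O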
More elaborate reactions are represented by reaction schemes, which in addition to starting materials and products show important intermediates or transition states. Also, some relatively minor additions to the reaction can be indicated above the reaction arrow; examples of such additions are water, heat, illumination, a catalyst, etc. Similarly, some minor products can be placed below the arrow, often with a minus sign.
Retrosynthetic analysis can be applied to design a complex synthesis reaction. Here the analysis starts from the products, for example by splitting selected chemical bonds, to arrive at plausible initial reagents. A special arrow (⇒) is used in retro reactions.
Elementary reactions
The elementary reaction is the smallest division into which a chemical reaction can be decomposed; it has no intermediate products. Most experimentally observed reactions are built up from many elementary reactions that occur in parallel or sequentially. The actual sequence of the individual elementary reactions is known as the reaction mechanism. An elementary reaction involves a few molecules, usually one or two, because of the low probability for several molecules to meet at a certain time.
The most important elementary reactions are unimolecular and bimolecular reactions. Only one molecule is involved in a unimolecular reaction; it is transformed by isomerization or a dissociation into one or more other molecules. Such reactions require the addition of energy in the form of heat or light. A typical example of a unimolecular reaction is the cis–trans isomerization, in which the cis-form of a compound converts to the trans-form or vice versa.
In a typical dissociation reaction, a bond in a molecule splits (ruptures) resulting in two molecular fragments. The splitting can be homolytic or heterolytic. In the first case, the bond is divided so that each product retains an electron and becomes a neutral radical. In the second case, both electrons of the chemical bond remain with one of the products, resulting in charged ions. Dissociation plays an important role in triggering chain reactions, such as hydrogen–oxygen or polymerization reactions.
AB -> A + B
Dissociation of a molecule AB into fragments A and B
For bimolecular reactions, two molecules collide and react with each other. Their merger is called chemical synthesis or an addition reaction.
A + B -> AB
Another possibility is that only a portion of one molecule is transferred to the other molecule. This type of reaction occurs, for example, in redox and acid-base reactions. In redox reactions, the transferred particle is an electron, whereas in acid-base reactions it is a proton. This type of reaction is also called metathesis.
HA + B -> A + HB
for example
NaCl + AgNO3 -> NaNO3 + AgCl(v)
Chemical equilibrium
Most chemical reactions are reversible; that is, they can and do run in both directions. The forward and reverse reactions are competing with each other and differ in reaction rates. These rates depend on the concentration and therefore change with the time of the reaction: the reverse rate gradually increases and becomes equal to the rate of the forward reaction, establishing the so-called chemical equilibrium. The time to reach equilibrium depends on parameters such as temperature, pressure, and the materials involved, and is determined by the minimum free energy. In equilibrium, the Gibbs free energy of reaction must be zero. The pressure dependence can be explained with the Le Chatelier's principle. For example, an increase in pressure due to decreasing volume causes the reaction to shift to the side with fewer moles of gas.
The reaction yield stabilizes at equilibrium but can be increased by removing the product from the reaction mixture or changed by increasing the temperature or pressure. A change in the concentrations of the reactants does not affect the equilibrium constant but does affect the equilibrium position.
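For reference, the equilibrium constant is tied to the free energy by the standard relation (added here for concreteness, not quoted from the text):
ΔG° = −RT ln K
so the equilibrium constant K exceeds 1 exactly when the standard Gibbs free energy change of the reaction is negative.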
Thermodynamics
Chemical reactions are determined by the laws of thermodynamics. Reactions can proceed by themselves if they are exergonic, that is if they release free energy. The associated free energy change of the reaction is composed of the changes of two different thermodynamic quantities, enthalpy and entropy:
ΔG = ΔH − TΔS
G: free energy, H: enthalpy, T: temperature, S: entropy, Δ: difference (change between original and product)
Reactions can be exothermic, where ΔH is negative and energy is released. Typical examples of exothermic reactions are combustion, precipitation and crystallization, in which ordered solids are formed from disordered gaseous or liquid phases. In contrast, in endothermic reactions, heat is consumed from the environment. This can occur by increasing the entropy of the system, often through the formation of gaseous or dissolved reaction products, which have higher entropy. Since the entropy term in the free-energy change increases with temperature, many endothermic reactions preferably take place at high temperatures. On the contrary, many exothermic reactions such as crystallization occur preferably at lower temperatures. A change in temperature can sometimes reverse the sign of the enthalpy of a reaction, as for the carbon monoxide reduction of molybdenum dioxide:
2CO(g) + MoO2(s) -> 2CO2(g) + Mo(s);
This reaction to form carbon dioxide and molybdenum is endothermic at low temperatures, becoming less so with increasing temperature. ΔH° reaches zero at a certain temperature, and the reaction becomes exothermic above that temperature.
Changes in temperature can also reverse the direction tendency of a reaction. For example, the water gas shift reaction
CO(g) + H2O(v) <=> CO2(g) + H2(g)
is favored by low temperatures, but its reverse is favored by high temperatures. The shift in reaction direction tendency occurs at a particular temperature.
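When ΔH° and ΔS° are treated as roughly constant over the temperature range of interest, the direction tendency flips where ΔG° = ΔH° − TΔS° changes sign, i.e. near T ≈ ΔH°/ΔS°. A minimal sketch (Python; the numbers are hypothetical placeholders, not the actual water gas shift data):

def crossover_temperature(delta_h, delta_s):
    # Temperature (K) at which delta_G = delta_H - T * delta_S changes sign,
    # assuming delta_H and delta_S are roughly temperature-independent.
    return delta_h / delta_s

# Hypothetical example values in J/mol and J/(mol*K), for illustration only.
print(f"direction flips near {crossover_temperature(-40_000.0, -40.0):.0f} K")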
Reactions can also be characterized by their internal energy change, which takes into account changes in the entropy, volume and chemical potentials. The latter depends, among other things, on the activities of the involved substances.
dU = T dS − p dV + μ dN
U: internal energy, S: entropy, p: pressure, V: volume, μ: chemical potential, N: number of molecules, d: small change sign
Kinetics
The speed at which reactions take place is studied by reaction kinetics. The rate depends on various parameters, such as:
Reactant concentrations: raising them usually makes the reaction faster by increasing the number of collisions per unit of time. Some reactions, however, have rates that are independent of reactant concentrations, due to a limited number of catalytic sites. These are called zero order reactions.
Surface area available for contact between the reactants, in particular solid ones in heterogeneous systems. Larger surface areas lead to higher reaction rates.
Pressure – increasing the pressure decreases the volume between molecules and therefore increases the frequency of collisions between the molecules.
Activation energy, which is defined as the amount of energy required to make the reaction start and carry on spontaneously. Higher activation energy implies that the reactants need more energy to start than a reaction with lower activation energy.
Temperature, which hastens reactions if raised, since higher temperature increases the energy of the molecules, creating more collisions per unit of time.
The presence or absence of a catalyst. Catalysts are substances that make weak bonds with reactants or intermediates and change the pathway (mechanism) of a reaction which in turn increases the speed of a reaction by lowering the activation energy needed for the reaction to take place. A catalyst is not destroyed or changed during a reaction, so it can be used again.
For some reactions, the presence of electromagnetic radiation, most notably ultraviolet light, is needed to promote the breaking of bonds to start the reaction. This is particularly true for reactions involving radicals.
Several theories allow calculating the reaction rates at the molecular level. This field is referred to as reaction dynamics. The rate v of a first-order reaction, which could be the disintegration of a substance A, is given by:
v = −d[A]/dt = k[A]
Its integration yields:
[A](t) = [A]0·exp(−kt)
Here k is the first-order rate constant, having dimension 1/time, [A](t) is the concentration at a time t and [A]0 is the initial concentration. The rate of a first-order reaction depends only on the concentration and the properties of the involved substance, and the reaction itself can be described with a characteristic half-life. More than one time constant is needed when describing reactions of higher order. The temperature dependence of the rate constant usually follows the Arrhenius equation:
k = k0·exp(−Ea/(kB·T))
where Ea is the activation energy and kB is the Boltzmann constant. One of the simplest models of reaction rate is the collision theory. More realistic models are tailored to a specific problem and include the transition state theory, the calculation of the potential energy surface, the Marcus theory and the Rice–Ramsperger–Kassel–Marcus (RRKM) theory.
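A short numerical sketch of the two relations above (Python; the pre-exponential factor, activation energy and temperature are illustrative assumptions, not values from the text):

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def first_order_concentration(a0, k, t):
    # Integrated first-order rate law: [A](t) = [A]0 * exp(-k * t).
    return a0 * math.exp(-k * t)

def arrhenius(k0, e_a, temperature):
    # k(T) = k0 * exp(-Ea / (kB * T)), with Ea given per molecule in joules.
    return k0 * math.exp(-e_a / (K_B * temperature))

# Illustrative numbers only.
k = arrhenius(k0=1.0e13, e_a=8.0e-20, temperature=300.0)
half_life = math.log(2) / k
print(f"k = {k:.3e} 1/s, half-life = {half_life:.3e} s")
print(f"[A]/[A]0 after one half-life: {first_order_concentration(1.0, k, half_life):.2f}")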
Reaction types
Four basic types
Synthesis
In a synthesis reaction, two or more simple substances combine to form a more complex substance. These reactions are in the general form:
A + B->AB
Two or more reactants yielding one product is another way to identify a synthesis reaction. One example of a synthesis reaction is the combination of iron and sulfur to form iron(II) sulfide:
8Fe + S8->8FeS
Another example is simple hydrogen gas combined with simple oxygen gas to produce a more complex substance, such as water.
Decomposition
A decomposition reaction is when a more complex substance breaks down into its more simple parts. It is thus the opposite of a synthesis reaction and can be written as
AB->A + B
One example of a decomposition reaction is the electrolysis of water to make oxygen and hydrogen gas:
2H2O->2H2 + O2
Single displacement
In a single displacement reaction, a single uncombined element replaces another in a compound; in other words, one element trades places with another element in a compound. These reactions come in the general form of:
A + BC->AC + B
One example of a single displacement reaction is when magnesium replaces hydrogen in water to make solid magnesium hydroxide and hydrogen gas:
Mg + 2H2O->Mg(OH)2 (v) + H2 (^)
Double displacement
In a double displacement reaction, the anions and cations of two compounds switch places and form two entirely different compounds. These reactions are in the general form:
AB + CD->AD + CB
For example, when barium chloride (BaCl2) and magnesium sulfate (MgSO4) react, the sulfate anion switches places with the two chloride anions, giving the compounds BaSO4 and MgCl2.
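Written out as a balanced equation (standard form added for concreteness; barium sulfate is the insoluble product and precipitates):
BaCl2 + MgSO4 -> BaSO4(v) + MgCl2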
Another example of a double displacement reaction is the reaction of lead(II) nitrate with potassium iodide to form lead(II) iodide and potassium nitrate:
Pb(NO3)2 + 2KI->PbI2(v) + 2KNO3
Forward and backward reactions
According to Le Chatelier's Principle, reactions may proceed in the forward or reverse direction until they end or reach equilibrium.
Forward reactions
Reactions that proceed in the forward direction (from left to right) to approach equilibrium are often called spontaneous reactions, that is, ΔG is negative, which means that if they occur at constant temperature and pressure, they decrease the Gibbs free energy of the reaction. They require less energy to proceed in the forward direction. Reactions are usually written as forward reactions in the direction in which they are spontaneous. Examples:
Reaction of hydrogen and oxygen to form water.
2H2 + O2 -> 2H2O
Dissociation of acetic acid in water into acetate ions and hydronium ions.
CH3COOH + H2O <=> CH3COO- + H3O+
Backward reactions
Reactions that proceed in the backward direction to approach equilibrium are often called non-spontaneous reactions, that is, ΔG is positive, which means that if they occur at constant temperature and pressure, they increase the Gibbs free energy of the reaction. They require input of energy to proceed in the forward direction. Examples include:
Charging a normal DC battery (consisting of electrolytic cells) from an external electrical power source
Photosynthesis driven by absorption of electromagnetic radiation usually in the form of sunlight
6CO2 + 6H2O + light energy -> C6H12O6 + 6O2
Combustion
In a combustion reaction, an element or compound reacts with an oxidant, usually oxygen, often producing energy in the form of heat or light. Combustion reactions frequently involve a hydrocarbon. For instance, the combustion of 1 mole (114 g) of octane in oxygen
C8H18(l) + 25/2 O2(g)->8CO2 + 9H2O(l)
releases 5500 kJ. A combustion reaction can also result from carbon, magnesium or sulfur reacting with oxygen.
2Mg(s) + O2->2MgO(s)
S(s) + O2(g)->SO2(g)
Oxidation and reduction
Redox reactions can be understood in terms of the transfer of electrons from one involved species (reducing agent) to another (oxidizing agent). In this process, the former species is oxidized and the latter is reduced. Though sufficient for many purposes, these descriptions are not precisely correct. Oxidation is better defined as an increase in oxidation state of atoms and reduction as a decrease in oxidation state. In practice, the transfer of electrons will always change the oxidation state, but there are many reactions that are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
In the following redox reaction, hazardous sodium metal reacts with toxic chlorine gas to form the ionic compound sodium chloride, or common table salt:
2Na(s) + Cl2(g)->2NaCl(s)
In the reaction, sodium metal goes from an oxidation state of 0 (a pure element) to +1: in other words, the sodium lost one electron and is said to have been oxidized. On the other hand, the chlorine gas goes from an oxidation of 0 (also a pure element) to −1: the chlorine gains one electron and is said to have been reduced. Because the chlorine is the one reduced, it is considered the electron acceptor, or in other words, induces oxidation in the sodium – thus the chlorine gas is considered the oxidizing agent. Conversely, the sodium is oxidized or is the electron donor, and thus induces a reduction in the other species and is considered the reducing agent.
Which of the involved reactants would be a reducing or oxidizing agent can be predicted from the electronegativity of their elements. Elements with low electronegativities, such as most metals, easily donate electrons and are oxidized – they are reducing agents. On the contrary, many oxides or ions with high oxidation numbers of their non-oxygen atoms, such as permanganate or dichromate ions, can gain one or two extra electrons and are strong oxidizing agents.
For some main-group elements the number of electrons donated or accepted in a redox reaction can be predicted from the electron configuration of the reactant element. Elements try to reach the low-energy noble gas configuration, and therefore alkali metals and halogens will donate and accept one electron, respectively. Noble gases themselves are chemically inactive.
The overall redox reaction can be balanced by combining the oxidation and reduction half-reactions multiplied by coefficients such that the number of electrons lost in the oxidation equals the number of electrons gained in the reduction.
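Applied to the sodium and chlorine example above, the half-reaction bookkeeping looks as follows (a standard decomposition added for illustration):
Oxidation: 2Na -> 2Na+ + 2e-
Reduction: Cl2 + 2e- -> 2Cl-
Overall: 2Na(s) + Cl2(g) -> 2NaCl(s)
Two electrons are lost in the oxidation and two are gained in the reduction, so the combined equation is balanced in both mass and charge.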
An important class of redox reactions are the electrolytic electrochemical reactions, where electrons from the power supply at the negative electrode are used as the reducing agent and electron withdrawal at the positive electrode as the oxidizing agent. These reactions are particularly important for the production of chemical elements, such as chlorine or aluminium. The reverse process, in which electrons are released in redox reactions and chemical energy is converted to electrical energy, is possible and used in batteries.
Complexation
In complexation reactions, several ligands react with a metal atom to form a coordination complex. This is achieved by providing lone pairs of the ligand into empty orbitals of the metal atom and forming dipolar bonds. The ligands are Lewis bases, they can be both ions and neutral molecules, such as carbon monoxide, ammonia or water. The number of ligands that react with a central metal atom can be found using the 18-electron rule, saying that the valence shells of a transition metal will collectively accommodate 18 electrons, whereas the symmetry of the resulting complex can be predicted with the crystal field theory and ligand field theory. Complexation reactions also include ligand exchange, in which one or more ligands are replaced by another, and redox processes which change the oxidation state of the central metal atom.
Acid–base reactions
In the Brønsted–Lowry acid–base theory, an acid–base reaction involves a transfer of protons (H+) from one species (the acid) to another (the base). When a proton is removed from an acid, the resulting species is termed that acid's conjugate base. When the proton is accepted by a base, the resulting species is termed that base's conjugate acid. In other words, acids act as proton donors and bases act as proton acceptors according to the following equation:
HA (acid) + B (base) <=> A- (conjugate base) + HB+ (conjugate acid)
The reverse reaction is possible, and thus the acid/base pair and the conjugate base/acid pair are always in equilibrium. The equilibrium is determined by the acid and base dissociation constants (Ka and Kb) of the involved substances. A special case of the acid–base reaction is neutralization, where an acid and a base, taken in exactly equivalent amounts, form a neutral salt.
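As a small numerical illustration of how the dissociation constant fixes the equilibrium position, the sketch below (Python) solves the usual quadratic for a monoprotic weak acid HA; the Ka and concentration are hypothetical example values:

import math

def weak_acid_ph(ka, c0):
    # For HA <=> H+ + A-, Ka = x*x / (c0 - x) with x = [H+];
    # take the positive root of x**2 + Ka*x - Ka*c0 = 0.
    x = (-ka + math.sqrt(ka * ka + 4.0 * ka * c0)) / 2.0
    return -math.log10(x)

# Hypothetical example: Ka = 1.8e-5 (an acetic-acid-like weak acid), 0.10 mol/L.
print(f"pH = {weak_acid_ph(1.8e-5, 0.10):.2f}")  # about 2.9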
Acid-base reactions can have different definitions depending on the acid-base concept employed. Some of the most common are:
Arrhenius definition: Acids dissociate in water releasing H3O+ ions; bases dissociate in water releasing OH− ions.
Brønsted–Lowry definition: Acids are proton (H+) donors, bases are proton acceptors; this includes the Arrhenius definition.
Lewis definition: Acids are electron-pair acceptors, and bases are electron-pair donors; this includes the Brønsted-Lowry definition.
Precipitation
Precipitation is the formation of a solid in a solution or inside another solid during a chemical reaction. It usually takes place when the concentration of dissolved ions exceeds the solubility limit and forms an insoluble salt. This process can be assisted by adding a precipitating agent or by the removal of the solvent. Rapid precipitation results in an amorphous or microcrystalline residue and a slow process can yield single crystals. The latter can also be obtained by recrystallization from microcrystalline salts.
Solid-state reactions
Reactions can take place between two solids. However, because of the relatively small diffusion rates in solids, the corresponding chemical reactions are very slow in comparison to liquid and gas phase reactions. They are accelerated by increasing the reaction temperature and finely dividing the reactant to increase the contacting surface area.
Reactions at the solid/gas interface
Reactions can take place at the solid–gas interface, i.e. on surfaces at very low pressure such as under ultra-high vacuum. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis.
Photochemical reactions
In photochemical reactions, atoms and molecules absorb energy (photons) of the illumination light and convert it into an excited state. They can then release this energy by breaking chemical bonds, thereby producing radicals. Photochemical reactions include hydrogen–oxygen reactions, radical polymerization, chain reactions and rearrangement reactions.
Many important processes involve photochemistry. The premier example is photosynthesis, in which most plants use solar energy to convert carbon dioxide and water into glucose, disposing of oxygen as a side-product. Humans rely on photochemistry for the formation of vitamin D, and vision is initiated by a photochemical reaction of rhodopsin. In fireflies, an enzyme in the abdomen catalyzes a reaction that results in bioluminescence. Many significant photochemical reactions, such as ozone formation, occur in the Earth atmosphere and constitute atmospheric chemistry.
Catalysis
In catalysis, the reaction does not proceed directly, but through a reaction with a third substance known as a catalyst. Although the catalyst takes part in the reaction, forming weak bonds with reactants or intermediates, it is returned to its original state by the end of the reaction and so is not consumed. However, it can be inhibited, deactivated or destroyed by secondary processes. Catalysts can be used in a different phase (heterogeneous) or in the same phase (homogeneous) as the reactants. In heterogeneous catalysis, typical secondary processes include coking, where the catalyst becomes covered by polymeric side products. Additionally, heterogeneous catalysts can dissolve into the solution in a solid–liquid system or evaporate in a solid–gas system. Catalysts can only speed up the reaction – chemicals that slow down the reaction are called inhibitors. Substances that increase the activity of catalysts are called promoters, and substances that deactivate catalysts are called catalytic poisons. With a catalyst, a reaction that is kinetically inhibited by a high activation energy can take place by circumventing this activation energy.
Heterogeneous catalysts are usually solids, powdered in order to maximize their surface area. Of particular importance in heterogeneous catalysis are the platinum group metals and other transition metals, which are used in hydrogenations, catalytic reforming and in the synthesis of commodity chemicals such as nitric acid and ammonia. Acids are an example of a homogeneous catalyst: they increase the electrophilicity of carbonyls, allowing reactions with nucleophiles that would not otherwise proceed. The advantage of homogeneous catalysts is the ease of mixing them with the reactants, but they may also be difficult to separate from the products. Therefore, heterogeneous catalysts are preferred in many industrial processes.
Reactions in organic chemistry
In organic chemistry, in addition to oxidation, reduction or acid-base reactions, a number of other reactions can take place which involves covalent bonds between carbon atoms or carbon and heteroatoms (such as oxygen, nitrogen, halogens, etc.). Many specific reactions in organic chemistry are name reactions designated after their discoverers.
One of the most industrially important reactions is the cracking of heavy hydrocarbons at oil refineries to create smaller, simpler molecules. This process is used to manufacture gasoline. Specific types of organic reactions may be grouped by their reaction mechanisms (particularly substitution, addition and elimination) or by the types of products they produce (for example, methylation, polymerisation and halogenation).
Substitution
In a substitution reaction, a functional group in a particular chemical compound is replaced by another group. These reactions can be distinguished by the type of substituting species into a nucleophilic, electrophilic or radical substitution.
In the first type, a nucleophile, an atom or molecule with an excess of electrons and thus a negative charge or partial charge, replaces another atom or part of the "substrate" molecule. The electron pair from the nucleophile attacks the substrate forming a new bond, while the leaving group departs with an electron pair. The nucleophile may be electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. Examples of nucleophiles are hydroxide ion, alkoxides, amines and halides. This type of reaction is found mainly in aliphatic hydrocarbons, and rarely in aromatic hydrocarbons. The latter have high electron density and enter nucleophilic aromatic substitution only with very strong electron-withdrawing groups. Nucleophilic substitution can take place by two different mechanisms, SN1 and SN2. In their names, S stands for substitution, N for nucleophilic, and the number represents the kinetic order of the reaction, unimolecular or bimolecular.
The SN1 reaction proceeds in two steps. First, the leaving group is eliminated creating a carbocation. This is followed by a rapid reaction with the nucleophile.
In the SN2 mechanism, the nucleophile forms a transition state with the attacked molecule, and only then is the leaving group cleaved. The two mechanisms differ in the stereochemistry of the products. SN1 is not stereospecific: the planar carbocation intermediate can be attacked from either face, so an existing stereocenter at the reacting carbon is typically racemized. In contrast, a reversal (Walden inversion) of the previously existing stereochemistry is observed in the SN2 mechanism.
Electrophilic substitution is the counterpart of the nucleophilic substitution in that the attacking atom or molecule, an electrophile, has low electron density and thus a positive charge. Typical electrophiles are the carbon atom of carbonyl groups, carbocations or sulfur or nitronium cations. This reaction takes place almost exclusively in aromatic hydrocarbons, where it is called electrophilic aromatic substitution. The electrophile attack results in the so-called σ-complex, a transition state in which the aromatic system is abolished. Then, the leaving group, usually a proton, is split off and the aromaticity is restored. An alternative to aromatic substitution is electrophilic aliphatic substitution. It is similar to the nucleophilic aliphatic substitution and also has two major types, SE1 and SE2.
In the third type of substitution reaction, radical substitution, the attacking particle is a radical. This process usually takes the form of a chain reaction, for example in the reaction of alkanes with halogens. In the first step, light or heat disintegrates the halogen-containing molecules producing radicals. Then the reaction proceeds as an avalanche until two radicals meet and recombine.
X. + R-H -> X-H + R.
R. + X2 -> R-X + X.
Reactions during the chain reaction of radical substitution
Addition and elimination
The addition and its counterpart, the elimination, are reactions that change the number of substituents on the carbon atom, and form or cleave multiple bonds. Double and triple bonds can be produced by eliminating a suitable leaving group. Similar to the nucleophilic substitution, there are several possible reaction mechanisms that are named after the respective reaction order. In the E1 mechanism, the leaving group is ejected first, forming a carbocation. The next step, the formation of the double bond, takes place with the elimination of a proton (deprotonation). The leaving order is reversed in the E1cb mechanism, that is the proton is split off first. This mechanism requires the participation of a base. Because of the similar conditions, both reactions in the E1 or E1cb elimination always compete with the SN1 substitution.
The E2 mechanism also requires a base, but there the attack of the base and the elimination of the leaving group proceed simultaneously and produce no ionic intermediate. In contrast to the E1 eliminations, different stereochemical configurations are possible for the reaction product in the E2 mechanism, because the attack of the base preferentially occurs in the anti-position with respect to the leaving group. Because of the similar conditions and reagents, the E2 elimination is always in competition with the SN2-substitution.
The counterpart of elimination is an addition where double or triple bonds are converted into single bonds. Similar to substitution reactions, there are several types of additions distinguished by the type of the attacking particle. For example, in the electrophilic addition of hydrogen bromide, an electrophile (proton) attacks the double bond forming a carbocation, which then reacts with the nucleophile (bromine). The carbocation can be formed on either side of the double bond depending on the groups attached to its ends, and the preferred configuration can be predicted with the Markovnikov's rule. This rule states that "In the heterolytic addition of a polar molecule to an alkene or alkyne, the more electronegative (nucleophilic) atom (or part) of the polar molecule becomes attached to the carbon atom bearing the smaller number of hydrogen atoms."
If the addition of a functional group takes place at the less substituted carbon atom of the double bond, then the electrophilic substitution with acids is not possible. In this case, one has to use the hydroboration–oxidation reaction, wherein the first step, the boron atom acts as electrophile and adds to the less substituted carbon atom. In the second step, the nucleophilic hydroperoxide or halogen anion attacks the boron atom.
While the addition to the electron-rich alkenes and alkynes is mainly electrophilic, the nucleophilic addition plays an important role in the carbon-heteroatom multiple bonds, and especially its most important representative, the carbonyl group. This process is often associated with elimination so that after the reaction the carbonyl group is present again. It is, therefore, called an addition-elimination reaction and may occur in carboxylic acid derivatives such as chlorides, esters or anhydrides. This reaction is often catalyzed by acids or bases, where the acids increase the electrophilicity of the carbonyl group by binding to the oxygen atom, whereas the bases enhance the nucleophilicity of the attacking nucleophile.
Nucleophilic addition of a carbanion or another nucleophile to the double bond of an alpha, beta-unsaturated carbonyl compound can proceed via the Michael reaction, which belongs to the larger class of conjugate additions. This is one of the most useful methods for the mild formation of C–C bonds.
Some additions which can not be executed with nucleophiles and electrophiles can be succeeded with free radicals. As with the free-radical substitution, the radical addition proceeds as a chain reaction, and such reactions are the basis of the free-radical polymerization.
Other organic reaction mechanisms
In a rearrangement reaction, the carbon skeleton of a molecule is rearranged to give a structural isomer of the original molecule. These include hydride shift reactions such as the Wagner-Meerwein rearrangement, where a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. Most rearrangements are associated with the breaking and formation of new carbon-carbon bonds. Other examples are sigmatropic reaction such as the Cope rearrangement.
Cyclic rearrangements include cycloadditions and, more generally, pericyclic reactions, wherein two or more double bond-containing molecules form a cyclic molecule. An important example of cycloaddition reaction is the Diels–Alder reaction (the so-called [4+2] cycloaddition) between a conjugated diene and a substituted alkene to form a substituted cyclohexene system.
Whether a certain cycloaddition would proceed depends on the electronic orbitals of the participating species, as only orbitals with the same sign of wave function will overlap and interact constructively to form new bonds. Cycloaddition is usually assisted by light or heat. These perturbations result in a different arrangement of electrons in the excited state of the involved molecules and therefore in different effects. For example, the [4+2] Diels-Alder reactions can be assisted by heat whereas the [2+2] cycloaddition is selectively induced by light. Because of the orbital character, the potential for developing stereoisomeric products upon cycloaddition is limited, as described by the Woodward–Hoffmann rules.
Biochemical reactions
Biochemical reactions are mainly controlled by complex proteins called enzymes, which are usually specialized to catalyze only a single, specific reaction. The reaction takes place in the active site, a small part of the enzyme which is usually found in a cleft or pocket lined by amino acid residues, and the rest of the enzyme is used mainly for stabilization. The catalytic action of enzymes relies on several mechanisms including the molecular shape ("induced fit"), bond strain, proximity and orientation of molecules relative to the enzyme, proton donation or withdrawal (acid/base catalysis), electrostatic interactions and many others.
The biochemical reactions that occur in living organisms are collectively known as metabolism. Among the most important of its mechanisms is the anabolism, in which different DNA and enzyme-controlled processes result in the production of large molecules such as proteins and carbohydrates from smaller units. Bioenergetics studies the sources of energy for such reactions. Important energy sources are glucose and oxygen, which can be produced by plants via photosynthesis or assimilated from food and air, respectively. All organisms use this energy to produce adenosine triphosphate (ATP), which can then be used to energize other reactions. Decomposition of organic material by fungi, bacteria and other micro-organisms is also within the scope of biochemistry.
Applications
Chemical reactions are central to chemical engineering, where they are used for the synthesis of new compounds from natural raw materials such as petroleum, mineral ores, and oxygen in air. It is essential to make the reaction as efficient as possible, maximizing the yield and minimizing the amount of reagents, energy input and waste. Catalysts are especially helpful for reducing the energy required for the reaction and increasing its reaction rate.
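A minimal sketch of the kind of efficiency bookkeeping involved: the Python snippet below computes percent yield and atom economy for a purely hypothetical reaction. All masses and molar masses are invented for illustration.

```python
# Minimal sketch: percent yield and atom economy for a hypothetical reaction.
# All masses and molar masses here are illustrative, not data from the text.

def percent_yield(actual_g: float, theoretical_g: float) -> float:
    """Percent yield = actual mass obtained / theoretical maximum * 100."""
    return 100.0 * actual_g / theoretical_g

def atom_economy(product_molar_mass: float, reactant_molar_masses: list[float]) -> float:
    """Atom economy = molar mass of desired product / sum of reactant molar masses * 100."""
    return 100.0 * product_molar_mass / sum(reactant_molar_masses)

# Hypothetical numbers: 42 g of product isolated where 50 g was theoretically possible,
# and a desired product of molar mass 100 g/mol formed from reactants of 60 and 80 g/mol.
print(f"Percent yield: {percent_yield(42.0, 50.0):.1f}%")         # 84.0%
print(f"Atom economy:  {atom_economy(100.0, [60.0, 80.0]):.1f}%")  # 71.4%
```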
Some specific reactions have their niche applications. For example, the thermite reaction is used to generate light and heat in pyrotechnics and welding. Although it is less controllable than the more conventional oxy-fuel welding, arc welding and flash welding, it requires much less equipment and is still used to mend rails, especially in remote areas.
Monitoring
Methods for monitoring chemical reactions depend strongly on the reaction rate. Relatively slow processes can be analyzed in situ for the concentrations and identities of the individual ingredients. Important tools of real-time analysis are the measurement of pH and analysis of optical absorption (color) and emission spectra. A less accessible but rather efficient method is the introduction of a radioactive isotope into the reaction and monitoring how it changes over time and where it moves to; this method is often used to analyze the redistribution of substances in the human body. Faster reactions are usually studied with ultrafast laser spectroscopy, where the use of femtosecond lasers allows short-lived transition states to be monitored on time scales down to a few femtoseconds.
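For a relatively slow reaction followed in situ, the concentration readings can be fitted to a simple rate law. The sketch below, assuming first-order kinetics and using invented data points, estimates a rate constant from the slope of ln(concentration) against time.

```python
# Minimal sketch: estimating a first-order rate constant k from concentration-vs-time
# readings, assuming ln[A] = ln[A]0 - k*t. The data points are invented for illustration.
import math

times = [0.0, 60.0, 120.0, 180.0, 240.0]     # seconds
conc  = [0.100, 0.074, 0.055, 0.041, 0.030]  # mol/L (hypothetical readings)

# Least-squares slope of ln(conc) against time gives -k.
xs = times
ys = [math.log(c) for c in conc]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)

k = -slope
half_life = math.log(2) / k
print(f"k = {k:.2e} per second, half-life = {half_life:.0f} s")
```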
See also
Chemical equation
Chemical reaction
Substrate
Reagent
Catalyst
Product
Chemical reaction model
Chemist
Chemistry
Combustion
Limiting reagent
List of organic reactions
Mass balance
Microscopic reversibility
Organic reaction
Reaction progress kinetic analysis
Reversible reaction
References
Bibliography
Chemistry
Change | 0.814223 | 0.998671 | 0.813141 |
Chemistry | Chemistry is the scientific study of the properties and behavior of matter. It is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds.
In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the Moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).
Chemistry has existed under various names since ancient times. It has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry.
Etymology
The word chemistry comes from a modification during the Renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. Alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry.
The modern word alchemy in turn is derived from the Arabic word al-kīmiyā. This may have Egyptian origins, since al-kīmiyā may be derived from the Ancient Greek khēmia, which is in turn derived from Kemet, the ancient name of Egypt in the Egyptian language. Alternately, al-kīmiyā may derive from the Greek khumeia, 'cast together'.
Modern principles
The current model of atomic structure is the quantum mechanical model. Traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. Matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. The interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. Such behaviors are studied in a chemistry laboratory.
The chemistry laboratory stereotypically uses various forms of laboratory glassware. However glassware is not central to chemistry, and a great deal of experimental (as well as applied/industrial) chemistry is done without it.
A chemical reaction is a transformation of some substances into one or more different substances. The basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. It can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. The number of atoms on the left and the right in the equation for a chemical transformation is equal. (When the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay.) The type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws.
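A chemical equation can be checked mechanically by counting the atoms of each element on both sides. The sketch below does this for the combustion of methane; the small formula parser is illustrative only and handles simple formulas such as CO2 and H2O (no parentheses or charges).

```python
# Minimal sketch: verify that atoms of each element balance in CH4 + 2 O2 -> CO2 + 2 H2O.
# The tiny parser below only handles element symbols followed by optional integer counts;
# it is illustrative, not a general chemical-formula parser.
import re
from collections import Counter

def count_atoms(formula: str) -> Counter:
    counts = Counter()
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += int(number) if number else 1
    return counts

def side_totals(side: list[tuple[int, str]]) -> Counter:
    total = Counter()
    for coefficient, formula in side:
        for element, n in count_atoms(formula).items():
            total[element] += coefficient * n
    return total

reactants = [(1, "CH4"), (2, "O2")]
products  = [(1, "CO2"), (2, "H2O")]

left, right = side_totals(reactants), side_totals(products)
print(left, right)                 # each side contains 1 C, 4 H and 4 O
print("balanced:", left == right)  # True
```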
Energy and entropy considerations are invariably important in almost all chemical studies. Chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. They can be analyzed using the tools of chemical analysis, e.g. spectroscopy and chromatography. Scientists engaged in chemical research are known as chemists. Most chemists specialize in one or more sub-disciplines. Several concepts are essential for the study of chemistry; some of them are:
Matter
In chemistry, matter is defined as anything that has rest mass and volume (it takes up space) and is made up of particles. The particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. Matter can be a pure chemical substance or a mixture of substances.
Atom
The atom is the basic unit of chemistry. It consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. The nucleus is made up of positively charged protons and uncharged neutrons (together called nucleons), while the electron cloud consists of negatively charged electrons which orbit the nucleus. In a neutral atom, the negatively charged electrons balance out the positive charge of the protons. The nucleus is dense; the mass of a nucleon is approximately 1,836 times that of an electron, yet the radius of an atom is about 10,000 times that of its nucleus.
The atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state(s), coordination number, and preferred types of bonds to form (e.g., metallic, ionic, covalent).
Element
A chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol Z. The mass number is the sum of the number of protons and neutrons in a nucleus. Although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number; atoms of an element which have different mass numbers are known as isotopes. For example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13.
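As a small arithmetic illustration of the definitions above, the neutron count of an isotope follows from N = A − Z; the sketch below applies this to the two carbon isotopes mentioned.

```python
# Minimal sketch: neutrons = mass number (A) - atomic number (Z).
def neutron_count(mass_number: int, atomic_number: int) -> int:
    return mass_number - atomic_number

Z_CARBON = 6
for A in (12, 13):
    print(f"carbon-{A}: {Z_CARBON} protons, {neutron_count(A, Z_CARBON)} neutrons")
# carbon-12: 6 protons, 6 neutrons
# carbon-13: 6 protons, 7 neutrons
```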
The standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. The periodic table is arranged in groups, or columns, and periods, or rows. The periodic table is useful in identifying periodic trends.
Compound
A compound is a pure chemical substance composed of more than one element. The properties of a compound bear little similarity to those of its elements. The standard nomenclature of compounds is set by the International Union of Pure and Applied Chemistry (IUPAC). Organic compounds are named according to the organic nomenclature system. The names for inorganic compounds are created according to the inorganic nomenclature system. When a compound has more than one component, then they are divided into two classes, the electropositive and the electronegative components. In addition the Chemical Abstracts Service has devised a method to index chemical substances. In this scheme each chemical substance is identifiable by a number known as its CAS registry number.
Molecule
A molecule is the smallest indivisible portion of a pure chemical substance that has its unique set of chemical properties, that is, its potential to undergo a certain set of chemical reactions with other substances. However, this definition only works well for substances that are composed of molecules, which is not true of many substances (see below). Molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs.
Thus, molecules exist as electrically neutral units, unlike ions. When this rule is broken, giving the "molecule" a charge, the result is sometimes named a molecular ion or a polyatomic ion. However, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well-separated form, such as a directed beam in a vacuum in a mass spectrometer. Charged polyatomic collections residing in solids (for example, common sulfate or nitrate ions) are generally not considered "molecules" in chemistry. Some molecules contain one or more unpaired electrons, creating radicals. Most radicals are comparatively reactive, but some, such as nitric oxide (NO) can be stable.
The "inert" or noble gas elements (helium, neon, argon, krypton, xenon and radon) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. Identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals.
However, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the Earth are chemical compounds without molecules. These other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. Instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. Examples of such substances are mineral salts (such as table salt), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite.
One of the main characteristics of a molecule is its geometry, often called its structure. While the structure of diatomic, triatomic or tetra-atomic molecules may be trivial (linear, angular, pyramidal, etc.), the structure of polyatomic molecules, which are constituted of more than six atoms (of several elements), can be crucial for their chemical nature.
Substance and mixture
A chemical substance is a kind of matter with a definite composition and set of properties. A collection of substances is called a mixture. Examples of mixtures are air and alloys.
Mole and amount of substance
The mole is a unit of measurement that denotes an amount of substance (also called chemical amount). One mole is defined to contain exactly 6.02214076 × 10^23 particles (atoms, molecules, ions, or electrons), where the number of particles per mole is known as the Avogadro constant. Molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol/dm3.
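A worked example with illustrative numbers: converting a mass of sodium chloride to an amount of substance, a molar concentration, and a particle count.

```python
# Minimal sketch: mass -> moles -> molar concentration, using illustrative numbers.
AVOGADRO = 6.02214076e23     # particles per mole (exact, by definition)

molar_mass_nacl = 58.44      # g/mol, approximate molar mass of NaCl
mass_g = 5.85                # grams of NaCl dissolved (hypothetical)
volume_dm3 = 0.250           # volume of solution in dm3 (hypothetical)

moles = mass_g / molar_mass_nacl
concentration = moles / volume_dm3           # mol/dm3
formula_units = moles * AVOGADRO

print(f"{moles:.4f} mol, {concentration:.3f} mol/dm3, {formula_units:.2e} formula units")
# about 0.1001 mol, 0.400 mol/dm3, 6.03e+22 formula units
```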
Phase
In addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. For the most part, the chemical classifications are independent of these bulk phase classifications; however, some more exotic phases are incompatible with certain chemical properties. A phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature.
Physical properties, such as density and refractive index tend to fall within values characteristic of the phase. The phase of matter is defined by the phase transition, which is when energy put into or taken out of the system goes into rearranging the structure of the system, instead of changing the bulk conditions.
Sometimes the distinction between phases can be continuous instead of having a discrete boundary; in this case the matter is considered to be in a supercritical state. When three states meet based on the conditions, it is known as a triple point and since this is invariant, it is a convenient way to define a set of conditions.
The most familiar examples of phases are solids, liquids, and gases. Many substances exhibit multiple solid phases. For example, there are three phases of solid iron (alpha, gamma, and delta) that vary based on temperature and pressure. A principal difference between solid phases is the crystal structure, or arrangement, of the atoms. Another phase commonly encountered in the study of chemistry is the aqueous phase, which is the state of substances dissolved in aqueous solution (that is, in water).
Less familiar phases include plasmas, Bose–Einstein condensates and fermionic condensates and the paramagnetic and ferromagnetic phases of magnetic materials. While most familiar phases deal with three-dimensional systems, it is also possible to define analogs in two-dimensional systems, which has received attention for its relevance to systems in biology.
Bonding
Atoms sticking together in molecules or crystals are said to be bonded with one another. A chemical bond may be visualized as the multipole balance between the positive charges in the nuclei and the negative charges oscillating about them. More than simple attraction and repulsion, the energies and distributions characterize the availability of an electron to bond to another atom.
The chemical bond can be a covalent bond, an ionic bond, a hydrogen bond or simply because of Van der Waals forces. Each of these kinds of bonds is ascribed to some potential. These potentials create the interactions which hold atoms together in molecules or crystals. In many simple compounds, valence bond theory, the Valence Shell Electron Pair Repulsion model (VSEPR), and the concept of oxidation number can be used to explain molecular structure and composition.
An ionic bond is formed when a metal loses one or more of its electrons, becoming a positively charged cation, and the electrons are then gained by the non-metal atom, becoming a negatively charged anion. The two oppositely charged ions attract one another, and the ionic bond is the electrostatic force of attraction between them. For example, sodium (Na), a metal, loses one electron to become an Na+ cation while chlorine (Cl), a non-metal, gains this electron to become Cl−. The ions are held together due to electrostatic attraction, and that compound sodium chloride (NaCl), or common table salt, is formed.
In a covalent bond, one or more pairs of valence electrons are shared by two atoms: the resulting electrically neutral group of bonded atoms is termed a molecule. Atoms will share valence electrons in such a way as to create a noble gas electron configuration (eight electrons in their outermost shell) for each atom. Atoms that tend to combine in such a way that they each have eight electrons in their valence shell are said to follow the octet rule. However, some elements like hydrogen and lithium need only two electrons in their outermost shell to attain this stable configuration; these atoms are said to follow the duet rule, and in this way they are reaching the electron configuration of the noble gas helium, which has two electrons in its outer shell.
Similarly, theories from classical physics can be used to predict many ionic structures. With more complicated compounds, such as metal complexes, valence bond theory is less applicable and alternative approaches, such as the molecular orbital theory, are generally used. See diagram on electronic orbitals.
Energy
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants.
A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. A reaction is said to be exothermic if the reaction releases heat to the surroundings; in the case of endothermic reactions, the reaction absorbs heat from the surroundings.
Chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(-E/kT), that is, the probability of a molecule having energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation.
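The Arrhenius equation is conventionally written k = A·exp(−Ea/(RT)). As a rough illustration, the sketch below shows how a 10 K temperature rise changes the rate constant for an assumed activation energy; the pre-exponential factor and activation energy are arbitrary choices for the example.

```python
# Minimal sketch of the Arrhenius equation k = A * exp(-Ea / (R * T)).
# The activation energy, pre-exponential factor and temperatures are assumptions.
import math

R = 8.314  # J/(mol*K), gas constant

def rate_constant(A: float, Ea: float, T: float) -> float:
    """Arrhenius rate constant for pre-exponential factor A and activation energy Ea (J/mol)."""
    return A * math.exp(-Ea / (R * T))

Ea = 75_000.0  # J/mol, assumed activation energy
k_298 = rate_constant(1.0e13, Ea, 298.15)
k_308 = rate_constant(1.0e13, Ea, 308.15)

print(f"k(298.15 K) = {k_298:.3e}, k(308.15 K) = {k_308:.3e}")
print(f"rate increase for a 10 K rise: x{k_308 / k_298:.1f}")  # roughly a factor of 2-3
```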
The activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound.
A related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. A reaction is feasible only if the total change in the Gibbs free energy is negative, ΔG < 0; if it is equal to zero, the chemical reaction is said to be at equilibrium.
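A minimal sketch of this feasibility check, ΔG = ΔH − TΔS: the enthalpy and entropy changes below are assumed values, chosen to show an endothermic reaction becoming spontaneous above a crossover temperature.

```python
# Minimal sketch: Gibbs free energy change dG = dH - T*dS decides feasibility.
# The enthalpy and entropy values are assumptions for illustration only.

dH = 50_000.0  # J/mol    (endothermic: absorbs heat)
dS = 150.0     # J/(mol*K) (entropy increases)

def delta_g(T: float) -> float:
    return dH - T * dS

for T in (298.15, 400.0, 500.0):
    dG = delta_g(T)
    verdict = "spontaneous" if dG < 0 else ("at equilibrium" if dG == 0 else "not spontaneous")
    print(f"T = {T:6.1f} K: dG = {dG / 1000:7.2f} kJ/mol -> {verdict}")

# The crossover (dG = 0) lies at T = dH/dS:
print(f"equilibrium temperature = {dH / dS:.0f} K")
```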
There exist only limited possible states of energy for electrons, atoms and molecules. These are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. The atoms/molecules in a higher energy state are said to be excited. The molecules/atoms of substance in an excited energy state are often much more reactive; that is, more amenable to chemical reactions.
The phase of a substance is invariably determined by its energy and the energy of its surroundings. When the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid, as is the case with water (H2O), which is a liquid at room temperature because its molecules are bound by hydrogen bonds. In contrast, hydrogen sulfide (H2S) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole–dipole interactions.
The transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy.
The existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. Different kinds of spectra are often used in chemical spectroscopy, e.g. IR, microwave, NMR, ESR, etc. Spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra.
The term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances.
Reaction
When a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. A chemical reaction is therefore a concept related to the "reaction" of a substance when it comes in close contact with another, whether as a mixture or a solution; exposure to some form of energy, or both. It results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels—often laboratory glassware.
Chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. Oxidation, reduction, dissociation, acid–base neutralization and molecular rearrangement are some examples of common chemical reactions.
A chemical reaction can be symbolically depicted through a chemical equation. While in a non-nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons.
The sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. A chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. Many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. Reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. Many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. Several empirical rules, like the Woodward–Hoffmann rules often come in handy while proposing a mechanism for a chemical reaction.
According to the IUPAC gold book, a chemical reaction is "a process that results in the interconversion of chemical species." Accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. An additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. Such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities (i.e. 'microscopic chemical events').
Ions and salts
An ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. When an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. When an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. Cations and anions can form a crystalline lattice of neutral salts, such as the Na+ and Cl− ions forming sodium chloride, or NaCl. Examples of polyatomic ions that do not split up during acid–base reactions are hydroxide (OH−) and phosphate (PO43−).
Plasma is composed of gaseous matter that has been completely ionized, usually through high temperature.
Acidity and basicity
A substance can often be classified as an acid or a base. There are several different theories which explain acid–base behavior. The simplest is Arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. According to Brønsted–Lowry acid–base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction; by extension, a base is the substance which receives that hydrogen ion.
A third common theory is Lewis acid–base theory, which is based on the formation of new chemical bonds. Lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. There are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept.
Acid strength is commonly measured by two methods. One measurement, based on the Arrhenius definition of acidity, is pH, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. Thus, solutions that have a low pH have a high hydronium ion concentration and can be said to be more acidic. The other measurement, based on the Brønsted–Lowry definition, is the acid dissociation constant (Ka), which measures the relative ability of a substance to act as an acid under the Brønsted–Lowry definition of an acid. That is, substances with a higher Ka are more likely to donate hydrogen ions in chemical reactions than those with lower Ka values.
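A minimal sketch connecting the two measures: given an assumed Ka for a weak acid and a chosen initial concentration, the dissociation equilibrium can be solved for the hydronium ion concentration, which is then converted to pH.

```python
# Minimal sketch: pH of a weak acid solution from its acid dissociation constant Ka.
# Ka and the initial concentration are illustrative (roughly acetic-acid-like values).
import math

Ka = 1.8e-5  # assumed acid dissociation constant
c0 = 0.10    # mol/L initial acid concentration (hypothetical)

# For HA <-> H+ + A-, Ka = x^2 / (c0 - x); solve the quadratic x^2 + Ka*x - Ka*c0 = 0.
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * c0)) / 2  # [H3O+] in mol/L
pH = -math.log10(x)

print(f"[H3O+] = {x:.2e} mol/L, pH = {pH:.2f}")  # about 1.3e-03 mol/L, pH about 2.88
```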
Redox
Redox (reduction–oxidation) reactions include all chemical reactions in which atoms have their oxidation state changed by either gaining electrons (reduction) or losing electrons (oxidation). Substances that have the ability to oxidize other substances are said to be oxidative and are known as oxidizing agents, oxidants or oxidizers. An oxidant removes electrons from another substance. Similarly, substances that have the ability to reduce other substances are said to be reductive and are known as reducing agents, reductants, or reducers.
A reductant transfers electrons to another substance and is thus oxidized itself. And because it "donates" electrons it is also called an electron donor. Oxidation and reduction properly refer to a change in oxidation number—the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number.
Equilibrium
Although the concept of equilibrium is widely used across sciences, in the context of chemistry, it arises whenever a number of different states of the chemical composition are possible, as for example, in a mixture of several chemical compounds that can react with one another, or when a substance can be present in more than one kind of phase.
A system of chemical substances at equilibrium, even though having an unchanging composition, is most often not static; molecules of the substances continue to react with one another thus giving rise to a dynamic equilibrium. Thus the concept describes the state in which the parameters such as chemical composition remain unchanged over time.
Chemical laws
Chemical reactions are governed by certain laws, which have become fundamental concepts in chemistry. Some of them are:
Avogadro's law
Beer–Lambert law
Boyle's law (1662, relating pressure and volume)
Charles's law (1787, relating volume and temperature)
Fick's laws of diffusion
Gay-Lussac's law (1809, relating pressure and temperature)
Le Chatelier's principle
Henry's law
Hess's law
Law of conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Law of conservation of mass: mass continues to be conserved in isolated systems, even in modern physics. However, special relativity shows that due to mass–energy equivalence, whenever non-material "energy" (heat, light, kinetic energy) is removed from a non-isolated system, some mass will be lost with it. High energy losses result in loss of weighable amounts of mass, an important topic in nuclear chemistry (a rough worked example follows this list).
Law of definite composition, although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
Law of multiple proportions
Raoult's law
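As a rough worked example of the mass–energy point made above under the law of conservation of mass, the sketch compares the mass equivalent Δm = E/c² of the energy released by a typical chemical reaction with that of a nuclear one; the energy figures are order-of-magnitude assumptions.

```python
# Minimal sketch: mass equivalent of released energy, dm = E / c^2.
# The energy figures are order-of-magnitude assumptions for illustration.
C = 2.998e8  # speed of light, m/s

def mass_loss_kg(energy_joules: float) -> float:
    return energy_joules / C**2

chemical_E = 5.0e5   # ~500 kJ per mole, typical of a chemical reaction
nuclear_E  = 2.0e13  # ~2e13 J per mole, typical of nuclear fission

print(f"chemical: {mass_loss_kg(chemical_E):.2e} kg per mole (far below weighable)")
print(f"nuclear:  {mass_loss_kg(nuclear_E):.2e} kg per mole (about 0.2 g per mole)")
```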
History
The history of chemistry spans a period from the ancient past to the present. Since several millennia BC, civilizations were using technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.
Chemistry was preceded by its protoscience, alchemy, which operated a non-scientific approach to understanding the constituents of matter and their interactions. Despite being unsuccessful in explaining the nature of matter and its transformations, alchemists set the stage for modern chemistry by performing experiments and recording the results. Robert Boyle, although skeptical of elements and convinced of alchemy, played a key part in elevating the "sacred art" as an independent, fundamental and philosophical discipline in his work The Sceptical Chymist (1661).
While both alchemy and chemistry are concerned with matter and its transformations, the crucial difference was given by the scientific method that chemists employed in their work. Chemistry, as a body of knowledge distinct from alchemy, became an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry afterwards is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.
Definition
The definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term "chymistry", in the view of noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1663, the chemist Christopher Glaser described "chymistry" as a scientific art, by which one learns to dissolve bodies, and draw from them the different substances on their composition, and how to unite them again, and exalt them to a higher perfection.
The 1730 definition of the word "chemistry", as used by Georg Ernst Stahl, meant the art of resolving mixed, compound, or aggregate bodies into their principles; and of composing such bodies from those principles. In 1837, Jean-Baptiste Dumas considered the word "chemistry" to refer to the science concerned with the laws and effects of molecular forces. This definition further evolved until, in 1947, it came to mean the science of substances: their structure, their properties, and the reactions that change them into other substances – a characterization accepted by Linus Pauling. More recently, in 1998, Professor Raymond Chang broadened the definition of "chemistry" to mean the study of matter and the changes it undergoes.
Background
Early civilizations, such as the Egyptians, Babylonians and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery and dyes, but did not develop a systematic theory.
A basic chemical hypothesis first emerged in Classical Greece with the theory of four elements as propounded definitively by Aristotle stating that fire, air, earth and water were the fundamental elements from which everything is formed as a combination. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. In 50 BCE, the Roman philosopher Lucretius expanded upon the theory in his poem De rerum natura (On The Nature of Things). Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments.
An early form of the idea of conservation of mass is the notion that "Nothing comes from nothing" in Ancient Greek philosophy, which can be found in Empedocles (approx. 4th century BC): "For it is impossible for anything to come to be from what is not, and it cannot be brought about or heard of that what is should be utterly destroyed." and Epicurus (3rd century BC), who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be".
In the Hellenistic world the art of alchemy first proliferated, mingling magic and occultism into the study of natural substances with the ultimate goal of transmuting elements into gold and discovering the elixir of eternal life. Work, particularly the development of distillation, continued in the early Byzantine period with the most famous practitioner being the 4th century Greek-Egyptian Zosimos of Panopolis. Alchemy continued to be developed and practised throughout the Arab world after the Muslim conquests, and from there, and from the Byzantine remnants, diffused into medieval and Renaissance Europe through Latin translations.
The Arabic works attributed to Jabir ibn Hayyan introduced a systematic classification of chemical substances, and provided instructions for deriving an inorganic compound (sal ammoniac or ammonium chloride) from organic substances (such as plants, blood, and hair) by chemical means. Some Arabic Jabirian works (e.g., the "Book of Mercy", and the "Book of Seventy") were later translated into Latin under the Latinized name "Geber", and in 13th-century Europe an anonymous writer, usually referred to as pseudo-Geber, started to produce alchemical and metallurgical writings under this name. Later influential Muslim philosophers, such as Abū al-Rayhān al-Bīrūnī and Avicenna disputed the theories of alchemy, particularly the theory of the transmutation of metals.
Improvements in the refining of ores and their extraction to smelt metals were a widely used source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his major work De re metallica in 1556. His work, describing the highly developed and complex processes of mining metal ores and metal extraction, was the pinnacle of metallurgy during that time. His approach removed all mysticism associated with the subject, creating the practical base upon which others could and would build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. Agricola has been described as the "father of metallurgy" and the founder of geology as a scientific discipline.
Under the influence of the new empirical methods propounded by Sir Francis Bacon and others, a group of chemists at Oxford, including Robert Boyle, Robert Hooke and John Mayow, began to reshape the old alchemical traditions into a scientific discipline. Boyle in particular questioned some commonly held chemical theories and argued for chemical practitioners to be more "philosophical" and less commercially focused in The Sceptical Chymist. He formulated Boyle's law, rejected the classical "four elements" and proposed a mechanistic alternative of atoms and chemical reactions that could be subject to rigorous experiment.
In the following decades, many important discoveries were made, such as the nature of 'air' which was discovered to be composed of many different gases. The Scottish chemist Joseph Black and the Flemish Jan Baptist van Helmont discovered carbon dioxide, or what Black called 'fixed air' in 1754; Henry Cavendish discovered hydrogen and elucidated its properties and Joseph Priestley and, independently, Carl Wilhelm Scheele isolated pure oxygen. The theory of phlogiston (a substance at the root of all combustion) was propounded by the German Georg Ernst Stahl in the early 18th century and was only overturned by the end of the century by the French chemist Antoine Lavoisier, the chemical analogue of Newton in physics. Lavoisier did more than any other to establish the new science on proper theoretical footing, by elucidating the principle of conservation of mass and developing a new system of chemical nomenclature used to this day.
English scientist John Dalton proposed the modern theory of atoms; that all substances are composed of indivisible 'atoms' of matter and that different atoms have varying atomic weights.
The development of the electrochemical theory of chemical combinations occurred in the early 19th century as the result of the work of two scientists in particular, Jöns Jacob Berzelius and Humphry Davy, made possible by the prior invention of the voltaic pile by Alessandro Volta. Davy discovered nine new elements including the alkali metals by extracting them from their oxides with electric current.
The British chemist William Prout first proposed ordering all the elements by their atomic weight, as all atoms appeared to have a weight that was an exact multiple of the atomic weight of hydrogen. J. A. R. Newlands devised an early table of elements, which was then developed into the modern periodic table of elements in the 1860s by Dmitri Mendeleev and independently by several other scientists including Julius Lothar Meyer. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the table.
At the turn of the twentieth century the theoretical underpinnings of chemistry were finally understood due to a series of remarkable discoveries that succeeded in probing and discovering the very nature of the internal structure of atoms. In 1897, J.J. Thomson of the University of Cambridge discovered the electron and soon after the French scientist Becquerel as well as the couple Pierre and Marie Curie investigated the phenomenon of radioactivity. In a series of pioneering scattering experiments Ernest Rutherford at the University of Manchester discovered the internal structure of the atom and the existence of the proton, classified and explained the different types of radioactivity and successfully transmuted the first element by bombarding nitrogen with alpha particles.
His work on atomic structure was improved on by his students, the Danish physicist Niels Bohr, the Englishman Henry Moseley and the German Otto Hahn, who went on to father the emerging nuclear chemistry and discovered nuclear fission. The electronic theory of chemical bonds and molecular orbitals was developed by the American scientists Linus Pauling and Gilbert N. Lewis.
The year 2011 was declared by the United Nations as the International Year of Chemistry. It was an initiative of the International Union of Pure and Applied Chemistry, and of the United Nations Educational, Scientific, and Cultural Organization and involves chemical societies, academics, and institutions worldwide and relied on individual initiatives to organize local and regional activities.
Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea. Other crucial 19th century advances were: an understanding of valence bonding (Edward Frankland in 1852) and the application of thermodynamics to chemistry (J. W. Gibbs and Svante Arrhenius in the 1870s).
Practice
In the practice of chemistry, pure chemistry is the study of the fundamental principles of chemistry, while applied chemistry applies that knowledge to develop technology and solve real-world problems.
Subdisciplines
Chemistry is typically divided into several major sub-disciplines. There are also several main cross-disciplinary and more specialized fields of chemistry.
Analytical chemistry is the analysis of material samples to gain an understanding of their chemical composition and structure. Analytical chemistry incorporates standardized experimental methods in chemistry. These methods may be used in all subdisciplines of chemistry, excluding purely theoretical chemistry.
Biochemistry is the study of the chemicals, chemical reactions and interactions that take place at a molecular level in living organisms. Biochemistry is highly interdisciplinary, covering medicinal chemistry, neurochemistry, molecular biology, forensics, plant science and genetics.
Inorganic chemistry is the study of the properties and reactions of inorganic compounds, such as metals and minerals. The distinction between organic and inorganic disciplines is not absolute and there is much overlap, most importantly in the sub-discipline of organometallic chemistry.
Materials chemistry is the preparation, characterization, and understanding of solid state components or devices with a useful current or future function. The field is a new breadth of study in graduate programs, and it integrates elements from all classical areas of chemistry like organic chemistry, inorganic chemistry, and crystallography with a focus on fundamental issues that are unique to materials. Primary systems of study include the chemistry of condensed phases (solids, liquids, polymers) and interfaces between different phases.
Neurochemistry is the study of neurochemicals; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system.
Nuclear chemistry is the study of how subatomic particles come together and make nuclei. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. In addition to medical applications, nuclear chemistry encompasses nuclear engineering, which explores the topic of using nuclear power sources for generating energy.
Organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. An organic compound is defined as any compound based on a carbon skeleton. Organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound.
Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of infinitesimal calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. Physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap.
Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics.
Others subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others.
Interdisciplinary
Interdisciplinary fields include agrochemistry, astrochemistry (and cosmochemistry), atmospheric chemistry, chemical engineering, chemical biology, chemo-informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid-state chemistry, surface science, thermochemistry, and many others.
Industry
The chemical industry represents an important economic activity worldwide. The global top 50 chemical producers in 2013 had sales of US$980.5 billion with a profit margin of 10.3%.
Professional societies
American Chemical Society
American Society for Neurochemistry
Chemical Institute of Canada
Chemical Society of Peru
International Union of Pure and Applied Chemistry
Royal Australian Chemical Institute
Royal Netherlands Chemical Society
Royal Society of Chemistry
Society of Chemical Industry
World Association of Theoretical and Computational Chemists
List of chemistry societies
See also
Comparison of software for molecular mechanics modeling
Glossary of chemistry terms
International Year of Chemistry
List of chemists
List of compounds
List of important publications in chemistry
List of unsolved problems in chemistry
Outline of chemistry
Periodic systems of small molecules
Philosophy of chemistry
Science tourism
References
Bibliography
Further reading
Popular reading
Atkins, P. W. Galileo's Finger (Oxford University Press)
Atkins, P. W. Atkins' Molecules (Cambridge University Press)
Kean, Sam. The Disappearing Spoon – and Other True Tales from the Periodic Table (Black Swan) London, England, 2010
Levi, Primo The Periodic Table (Penguin Books) [1975] translated from the Italian by Raymond Rosenthal (1984)
Stwertka, A. A Guide to the Elements (Oxford University Press)
Introductory undergraduate textbooks
Atkins, P.W., Overton, T., Rourke, J., Weller, M. and Armstrong, F. Shriver and Atkins Inorganic Chemistry (4th ed.) 2006 (Oxford University Press)
Chang, Raymond. Chemistry 6th ed. Boston, Massachusetts: James M. Smith, 1998.
Voet and Voet. Biochemistry (Wiley)
Advanced undergraduate-level or graduate textbooks
Atkins, P. W. Physical Chemistry (Oxford University Press)
Atkins, P. W. et al. Molecular Quantum Mechanics (Oxford University Press)
McWeeny, R. Coulson's Valence (Oxford Science Publications)
Pauling, L. The Nature of the chemical bond (Cornell University Press)
Pauling, L., and Wilson, E. B. Introduction to Quantum Mechanics with Applications to Chemistry (Dover Publications)
Smart and Moore. Solid State Chemistry: An Introduction (Chapman and Hall)
Stephenson, G. Mathematical Methods for Science Students (Longman)
External links
General Chemistry principles, patterns and applications. | 0.806816 | 0.999546 | 0.806449 |
Analysis | Analysis (: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 B.C.), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek ἀνάλυσις (analysis, "a breaking-up" or "an untying"; from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Science and technology
Chemistry
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For an example of its use, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists will analyze neutron activation to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of Analysis
A) Qualitative Analysis: It is concerned with which components are in a given sample or compound.
Example: Precipitation reaction
B) Quantitative Analysis: It determines the quantity of each component present in a given sample or compound.
Example: Determining a concentration with a UV spectrophotometer.
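A minimal sketch of how such a quantitative UV measurement is reduced to a number: the Beer–Lambert law A = ε·l·c relates absorbance to concentration, so a measured absorbance yields a concentration once the molar absorptivity and path length are known. All values below are illustrative.

```python
# Minimal sketch: concentration from a UV absorbance reading via the Beer-Lambert law,
# A = epsilon * l * c. The absorptivity, path length and absorbance are illustrative.

epsilon = 1.2e4    # L/(mol*cm), assumed molar absorptivity of the analyte
path_cm = 1.0      # cm, standard cuvette path length
absorbance = 0.45  # measured absorbance (hypothetical reading)

concentration = absorbance / (epsilon * path_cm)  # mol/L
print(f"c = {concentration:.2e} mol/L")           # about 3.8e-05 mol/L
```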
Isotopes
Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity.
Computer science
Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms
Lexical analysis – the process of processing an input sequence of characters and producing as output a sequence of symbols
Object-oriented analysis and design – à la Booch
Program analysis (computer science) – the process of automatically analysing the behavior of computer programs
Semantic analysis (computer science) – a pass by a compiler that adds semantical information to the parse tree and performs certain checks
Static code analysis – the analysis of computer software that is performed without actually executing the programs built from that software
Structured systems analysis and design methodology – à la Yourdon
Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing
Worst-case execution time – determines the longest time that a piece of software can take to run
Engineering
Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineering analysis also considers the different factors incorporated within a design.
Mathematics
Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors.
Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows:
The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it."
The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions.
James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884):
The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be known a falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem.
Psychotherapy
Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes
Transactional analysis
Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening in human behavior.
Signal processing
Finite element analysis – a computer simulation technique used in engineering analysis
Independent component analysis
Link quality analysis – the analysis of signal quality
Path quality analysis
Fourier analysis
Statistics
In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are:
Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts
Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis
Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity
Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors)
Meta-analysis – combines the results of several studies that address a set of related research hypotheses
Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis
Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis (see the sketch after this list)
Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data
Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale
Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs
Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met
Spatial analysis – the study of entities using geometric or geographic properties
Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals
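As a concrete illustration of one of the methods above, the sketch below performs a principal component analysis on a small synthetic data set; numpy is assumed to be available and the data are randomly generated for the example.

```python
# Minimal sketch: principal component analysis (PCA) of a small synthetic data set.
# numpy is assumed to be available; the data are randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Two correlated variables: the second is mostly a scaled copy of the first plus noise.
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)
data = np.column_stack([x, y])

# Centre the data, then diagonalise the covariance matrix.
centred = data - data.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order; reverse for "largest variance first".
order = np.argsort(eigenvalues)[::-1]
explained = eigenvalues[order] / eigenvalues.sum()

print("principal directions (columns):\n", eigenvectors[:, order])
print("fraction of variance explained:", np.round(explained, 3))
# Most of the variance lies along the first principal component, as expected
# for two strongly correlated variables.
```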
Business
Business
Financial statement analysis – the analysis of the accounts and the economic prospects of a firm
Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project
Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization
Business analysis – involves identifying the needs and determining the solutions to business problems
Price analysis – involves the breakdown of a price to a unit figure
Market analysis – the study of a market's suppliers and customers, where price is determined by the interaction of supply and demand
Sum-of-the-parts analysis – method of valuation of a multi-divisional company
Opportunity analysis – examines customer trends within the industry, and how customer demand and experience determine purchasing behavior
Economics
Agroecosystem analysis
Input–output model – if applied to a region, it is called a Regional Impact Multiplier System
Government
Intelligence
The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.
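As a small illustration of the frequency analysis mentioned above, the sketch below tallies letter frequencies in a ciphertext; the sample string (a simple Caesar shift) and the idea of matching the commonest ciphertext letters to the commonest English letters are illustrative assumptions, not details from the text:

import string
from collections import Counter

def letter_frequencies(text):
    """Return the relative frequency of each letter A-Z in the text."""
    letters = [c for c in text.upper() if c in string.ascii_uppercase]
    counts = Counter(letters)
    total = sum(counts.values())
    return {letter: counts[letter] / total for letter in string.ascii_uppercase}

# Hypothetical ciphertext produced by shifting each letter of an English
# sentence three places down the alphabet.
ciphertext = "WKLV LV D VHFUHW PHVVDJH"
freqs = letter_frequencies(ciphertext)
# The most common ciphertext letters are candidate substitutes for the most
# common English letters (E, T, A, ...).
print(sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)[:5])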
Policy
Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies
Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation.
Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions
Humanities and social sciences
Linguistics
Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues.
Literature
Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects.
Music
Musical analysis – a process attempting to answer the question "How does this music work?"
Musical analysis is the study of how composers use notes together to compose music. Those studying music will find that musical analysis differs from composer to composer and depends on the culture and history of the music studied. An analysis of music is meant to simplify the music for the listener.
Schenkerian analysis
Schenkerian analysis is a method of music analysis that focuses on producing a graphic representation of a piece. This includes both an analytical procedure and a notational style. Simply put, it analyzes tonal music, including all chords and tones within a composition.
Philosophy
Philosophical analysis – a general term for the techniques used by philosophers
Philosophical analysis refers to the clarification of words, the way they are put together, and the meaning entailed by them. Philosophical analysis dives deeper into the meaning of words and seeks to clarify that meaning by contrasting the various definitions. It is the study of reality, the justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If questions can be answered empirically, meaning they can be answered by using the senses, then they are not considered philosophical. Non-philosophical questions also include events that happened in the past, or questions science or mathematics can answer.
Analysis is the name of a prominent journal in philosophy.
Other
Aura analysis – a pseudoscientific technique in which supporters of the method claim that the body's aura, or energy field, is analysed
Bowling analysis – the analysis of the performance of bowlers in cricket
Lithic analysis – the analysis of stone tools using basic scientific techniques
Lithic analysis is most often used by archeologists to determine which types of stone tools were used in a given time period, based on the artifacts discovered.
Protocol analysis – a means for extracting persons' thoughts while they are performing a task
See also
Formal analysis
Metabolism in biology
Methodology
Scientific method
References
External links
Abstraction
Critical thinking skills
Emergence
Empiricism
Epistemological theories
Intelligence
Mathematical modeling
Metaphysics of mind
Methodology
Ontology
Philosophy of logic
Rationalism
Reasoning
Research methods
Scientific method
Theory of mind
Anabolism
Anabolism is the set of metabolic pathways that construct macromolecules like DNA or RNA from smaller units. These reactions require energy; such energy-consuming processes are known as endergonic. Anabolism is the building-up aspect of metabolism, whereas catabolism is the breaking-down aspect. Anabolism is usually synonymous with biosynthesis.
Pathway
Polymerization, an anabolic pathway used to build macromolecules such as nucleic acids, proteins, and polysaccharides, uses condensation reactions to join monomers. Macromolecules are created from smaller molecules using enzymes and cofactors.
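The condensation (dehydration) step mentioned above can be written generically; the scheme below is a standard textbook summary added for clarity, with each joining of two monomers releasing one molecule of water:

\[
\text{monomer--OH} + \text{H--monomer} \;\longrightarrow\; \text{monomer--monomer} + \mathrm{H_2O}
\]
\[
n\ \text{monomers} \;\longrightarrow\; \text{polymer} + (n-1)\,\mathrm{H_2O}
\]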
Energy source
Anabolism is powered by catabolism, where large molecules are broken down into smaller parts and then used up in cellular respiration. Many anabolic processes are powered by the cleavage of adenosine triphosphate (ATP). Anabolism usually involves reduction and decreases entropy, making it unfavorable without energy input. The starting materials, called the precursor molecules, are joined using the chemical energy made available from hydrolyzing ATP, reducing the cofactors NAD+, NADP+, and FAD, or performing other favorable side reactions. Occasionally it can also be driven by entropy without energy input, in cases like the formation of the phospholipid bilayer of a cell, where hydrophobic interactions aggregate the molecules.
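A standard worked example of this kind of coupling, using approximate textbook standard free energies rather than values taken from the text: the unfavorable synthesis of glutamine from glutamate becomes favorable when driven by ATP hydrolysis.

\[
\text{glutamate} + \mathrm{NH_4^+} \longrightarrow \text{glutamine} + \mathrm{H_2O},
\qquad \Delta G^{\circ\prime} \approx +14\ \mathrm{kJ\,mol^{-1}}
\]
\[
\mathrm{ATP} + \mathrm{H_2O} \longrightarrow \mathrm{ADP} + \mathrm{P_i},
\qquad \Delta G^{\circ\prime} \approx -30.5\ \mathrm{kJ\,mol^{-1}}
\]
\[
\text{coupled reaction:}\qquad \Delta G^{\circ\prime} \approx 14 - 30.5 \approx -16.5\ \mathrm{kJ\,mol^{-1}} < 0
\]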
Cofactors
The reducing agents NADH, NADPH, and FADH2, as well as metal ions, act as cofactors at various steps in anabolic pathways. NADH, NADPH, and FADH2 act as electron carriers, while charged metal ions within enzymes stabilize charged functional groups on substrates.
Substrates
Substrates for anabolism are mostly intermediates taken from catabolic pathways during periods of high energy charge in the cell.
Functions
Anabolic processes build organs and tissues. These processes produce growth and differentiation of cells and increase in body size, a process that involves synthesis of complex molecules. Examples of anabolic processes include the growth and mineralization of bone and increases in muscle mass.
Anabolic hormones
Endocrinologists have traditionally classified hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The classic anabolic hormones are the anabolic steroids, which stimulate protein synthesis and muscle growth, and insulin.
Photosynthetic carbohydrate synthesis
Photosynthetic carbohydrate synthesis in plants and certain bacteria is an anabolic process that produces glucose, cellulose, starch, lipids, and proteins from CO2. It uses the energy produced from the light-driven reactions of photosynthesis, and creates the precursors to these large molecules via carbon assimilation in the photosynthetic carbon reduction cycle, a.k.a. the Calvin cycle.
Amino acid biosynthesis
All amino acids are formed from intermediates in the catabolic processes of glycolysis, the citric acid cycle, or the pentose phosphate pathway. From glycolysis, glucose 6-phosphate is a precursor for histidine; 3-phosphoglycerate is a precursor for glycine and cysteine; phosphoenol pyruvate, combined with the 3-phosphoglycerate-derivative erythrose 4-phosphate, forms tryptophan, phenylalanine, and tyrosine; and pyruvate is a precursor for alanine, valine, leucine, and isoleucine. From the citric acid cycle, α-ketoglutarate is converted into glutamate and subsequently glutamine, proline, and arginine; and oxaloacetate is converted into aspartate and subsequently asparagine, methionine, threonine, and lysine.
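The precursor relationships stated in this paragraph can be collected into a simple lookup table; the sketch below is only a restatement of the text in data-structure form and adds no biochemical information of its own:

# Precursor -> amino acids derived from it, exactly as listed in the paragraph above.
AMINO_ACID_PRECURSORS = {
    "glucose 6-phosphate": ["histidine"],
    "3-phosphoglycerate": ["glycine", "cysteine"],
    "phosphoenolpyruvate + erythrose 4-phosphate": ["tryptophan", "phenylalanine", "tyrosine"],
    "pyruvate": ["alanine", "valine", "leucine", "isoleucine"],
    "alpha-ketoglutarate": ["glutamate", "glutamine", "proline", "arginine"],
    "oxaloacetate": ["aspartate", "asparagine", "methionine", "threonine", "lysine"],
}

def precursors_of(amino_acid):
    """Return the listed precursor(s) for a given amino acid."""
    return [p for p, products in AMINO_ACID_PRECURSORS.items() if amino_acid in products]

print(precursors_of("glutamine"))   # ['alpha-ketoglutarate']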
Glycogen storage
During periods of high blood sugar, glucose 6-phosphate from glycolysis is diverted to the glycogen-storing pathway. It is changed to glucose-1-phosphate by phosphoglucomutase and then to UDP-glucose by UTP--glucose-1-phosphate uridylyltransferase. Glycogen synthase adds this UDP-glucose to a glycogen chain.
Gluconeogenesis
Glucagon is traditionally a catabolic hormone, but also stimulates the anabolic process of gluconeogenesis by the liver, and to a lesser extent the kidney cortex and intestines, during starvation to prevent low blood sugar. It is the process of converting pyruvate into glucose. Pyruvate can come from the breakdown of glucose, lactate, amino acids, or glycerol. The gluconeogenesis pathway has many reversible enzymatic processes in common with glycolysis, but it is not the process of glycolysis in reverse. It uses different irreversible enzymes to ensure the overall pathway runs in one direction only.
Regulation
Anabolism operates with enzymes separate from those of catabolism, and these enzymes undergo irreversible steps at some point in their pathways. This allows the cell to regulate the rate of production and prevent an infinite loop, also known as a futile cycle, from forming with catabolism.
The balance between anabolism and catabolism is sensitive to ADP and ATP, otherwise known as the energy charge of the cell. High amounts of ATP cause cells to favor the anabolic pathway and slow catabolic activity, while excess ADP slows anabolism and favors catabolism. These pathways are also regulated by circadian rhythms, with processes such as glycolysis fluctuating to match an animal's normal periods of activity throughout the day.
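The "energy charge" referred to above is commonly quantified as Atkinson's adenylate energy charge; the definition below is the standard formulation, given here for context rather than taken from the text:

\[
\text{energy charge} \;=\; \frac{[\mathrm{ATP}] + \tfrac{1}{2}[\mathrm{ADP}]}{[\mathrm{ATP}] + [\mathrm{ADP}] + [\mathrm{AMP}]}
\]

It ranges from 0 (all AMP) to 1 (all ATP), and actively metabolizing cells are usually quoted as holding it at roughly 0.8 to 0.95.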
Etymology
The word anabolism is from Neo-Latin, with roots from the Greek aná, "upward", and ballein, "to throw".
References
Metabolism
Metabolism (from the Greek metabolē, "change") is the set of life-sustaining chemical reactions in organisms. The three main functions of metabolism are: the conversion of the energy in food to energy available to run cellular processes; the conversion of food to building blocks of proteins, lipids, nucleic acids, and some carbohydrates; and the elimination of metabolic wastes. These enzyme-catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. The word metabolism can also refer to the sum of all chemical reactions that occur in living organisms, including digestion and the transportation of substances into and between different cells, in which case the above described set of reactions within the cells is called intermediary (or intermediate) metabolism.
Metabolic reactions may be categorized as catabolic—the breaking down of compounds (for example, of glucose to pyruvate by cellular respiration); or anabolic—the building up (synthesis) of compounds (such as proteins, carbohydrates, lipids, and nucleic acids). Usually, catabolism releases energy, and anabolism consumes energy.
The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. Enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy and will not occur by themselves, by coupling them to spontaneous reactions that release energy. Enzymes act as catalysts—they allow a reaction to proceed more rapidly—and they also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell's environment or to signals from other cells.
The metabolic system of a particular organism determines which substances it will find nutritious and which poisonous. For example, some prokaryotes use hydrogen sulfide as a nutrient, yet this gas is poisonous to animals. The basal metabolic rate of an organism is the measure of the amount of energy consumed by all of these chemical reactions.
A striking feature of metabolism is the similarity of the basic metabolic pathways among vastly different species. For example, the set of carboxylic acids that are best known as the intermediates in the citric acid cycle are present in all known organisms, being found in species as diverse as the unicellular bacterium Escherichia coli and huge multicellular organisms like elephants. These similarities in metabolic pathways are likely due to their early appearance in evolutionary history, and their retention is likely due to their efficacy. In various diseases, such as type II diabetes, metabolic syndrome, and cancer, normal metabolism is disrupted. The metabolism of cancer cells is also different from the metabolism of normal cells, and these differences can be used to find targets for therapeutic intervention in cancer.
Key biochemicals
Most of the structures that make up animals, plants and microbes are made from four basic classes of molecules: amino acids, carbohydrates, nucleic acid and lipids (often called fats). As these molecules are vital for life, metabolic reactions either focus on making these molecules during the construction of cells and tissues, or on breaking them down and using them to obtain energy, by their digestion. These biochemicals can be joined to make polymers such as DNA and proteins, essential macromolecules of life.
Amino acids and proteins
Proteins are made of amino acids arranged in a linear chain joined by peptide bonds. Many proteins are enzymes that catalyze the chemical reactions in metabolism. Other proteins have structural or mechanical functions, such as those that form the cytoskeleton, a system of scaffolding that maintains the cell shape. Proteins are also important in cell signaling, immune responses, cell adhesion, active transport across membranes, and the cell cycle. Amino acids also contribute to cellular energy metabolism by providing a carbon source for entry into the citric acid cycle (tricarboxylic acid cycle), especially when a primary source of energy, such as glucose, is scarce, or when cells undergo metabolic stress.
Lipids
Lipids are the most diverse group of biochemicals. Their main structural uses are as part of internal and external biological membranes, such as the cell membrane. Their chemical energy can also be used. Lipids contain a long, non-polar hydrocarbon chain with a small polar region containing oxygen. Lipids are usually defined as hydrophobic or amphipathic biological molecules but will dissolve in organic solvents such as ethanol, benzene or chloroform. The fats are a large group of compounds that contain fatty acids and glycerol; a glycerol molecule attached to three fatty acids by ester linkages is called a triacylglyceride. Several variations of the basic structure exist, including backbones such as sphingosine in sphingomyelin, and hydrophilic groups such as phosphate in phospholipids. Steroids such as sterol are another major class of lipids.
Carbohydrates
Carbohydrates are aldehydes or ketones, with many hydroxyl groups attached, that can exist as straight chains or rings. Carbohydrates are the most abundant biological molecules, and fill numerous roles, such as the storage and transport of energy (starch, glycogen) and structural components (cellulose in plants, chitin in animals). The basic carbohydrate units are called monosaccharides and include galactose, fructose, and most importantly glucose. Monosaccharides can be linked together to form polysaccharides in almost limitless ways.
Nucleotides
The two nucleic acids, DNA and RNA, are polymers of nucleotides. Each nucleotide is composed of a phosphate attached to a ribose or deoxyribose sugar group which is attached to a nitrogenous base. Nucleic acids are critical for the storage and use of genetic information, and its interpretation through the processes of transcription and protein biosynthesis. This information is protected by DNA repair mechanisms and propagated through DNA replication. Many viruses have an RNA genome, such as HIV, which uses reverse transcription to create a DNA template from its viral RNA genome. RNA in ribozymes such as spliceosomes and ribosomes is similar to enzymes as it can catalyze chemical reactions. Individual nucleosides are made by attaching a nucleobase to a ribose sugar. These bases are heterocyclic rings containing nitrogen, classified as purines or pyrimidines. Nucleotides also act as coenzymes in metabolic-group-transfer reactions.
Coenzymes
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups of atoms and their bonds within molecules. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are called coenzymes. Each class of group-transfer reactions is carried out by a particular coenzyme, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. These coenzymes are therefore continuously made, consumed and then recycled.
One central coenzyme is adenosine triphosphate (ATP), the energy currency of cells. This nucleotide is used to transfer chemical energy between different chemical reactions. There is only a small amount of ATP in cells, but as it is continuously regenerated, the human body can use about its own weight in ATP per day. ATP acts as a bridge between catabolism and anabolism. Catabolism breaks down molecules, and anabolism puts them together. Catabolic reactions generate ATP, and anabolic reactions consume it. It also serves as a carrier of phosphate groups in phosphorylation reactions.
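To make the turnover claim concrete, a rough back-of-the-envelope calculation (the instantaneous pool of about 0.25 kg of ATP and the 70 kg body mass are common textbook estimates, not figures from the text):

\[
\frac{\text{ATP turned over per day}}{\text{ATP present at any instant}} \;\approx\; \frac{70\ \mathrm{kg}}{0.25\ \mathrm{kg}} \;\approx\; 280,
\]

so each ADP/ATP molecule is recycled on the order of a few hundred times per day.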
A vitamin is an organic compound needed in small quantities that cannot be made in cells. In human nutrition, most vitamins function as coenzymes after modification; for example, all water-soluble vitamins are phosphorylated or are coupled to nucleotides when they are used in cells. Nicotinamide adenine dinucleotide (NAD+), a derivative of vitamin B3 (niacin), is an important coenzyme that acts as a hydrogen acceptor. Hundreds of separate types of dehydrogenases remove electrons from their substrates and reduce NAD+ into NADH. This reduced form of the coenzyme is then a substrate for any of the reductases in the cell that need to transfer hydrogen atoms to their substrates. Nicotinamide adenine dinucleotide exists in two related forms in the cell, NADH and NADPH. The NAD+/NADH form is more important in catabolic reactions, while NADP+/NADPH is used in anabolic reactions.
Minerals and cofactors
Inorganic elements play critical roles in metabolism; some are abundant (e.g. sodium and potassium) while others function at minute concentrations. About 99% of a human's body weight is made up of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. Organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen; most of the oxygen and hydrogen is present as water.
The abundant inorganic elements act as electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate and the organic ion bicarbonate. The maintenance of precise ion gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerve and muscle function, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cell's fluid, the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.
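The connection between an ion gradient and an electrical potential can be made quantitative with the Nernst equation; the potassium concentrations below are typical textbook values for mammalian cells and are included only as an illustration:

\[
E_{\mathrm{ion}} \;=\; \frac{RT}{zF}\,\ln\!\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}},
\qquad
E_{\mathrm{K^+}} \;\approx\; 26.7\ \mathrm{mV}\times\ln\!\frac{5\ \mathrm{mM}}{140\ \mathrm{mM}} \;\approx\; -89\ \mathrm{mV}
\]

at 37 °C, which is of the same order as the resting membrane potential of nerve and muscle cells.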
Transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant of those. Metal cofactors are bound tightly to specific sites in proteins; although enzyme cofactors can be modified during catalysis, they always return to their original state by the end of the reaction catalyzed. Metal micronutrients are taken up into organisms by specific transporters and bind to storage proteins such as ferritin or metallothionein when not in use.
Catabolism
Catabolism is the set of metabolic processes that break down large molecules. These include breaking down and oxidizing food molecules. The purpose of the catabolic reactions is to provide the energy and components needed by anabolic reactions which build molecules. The exact nature of these catabolic reactions differ from organism to organism, and organisms can be classified based on their sources of energy, hydrogen, and carbon (their primary nutritional groups), as shown in the table below. Organic molecules are used as a source of hydrogen atoms or electrons by organotrophs, while lithotrophs use inorganic substrates. Whereas phototrophs convert sunlight to chemical energy, chemotrophs depend on redox reactions that involve the transfer of electrons from reduced donor molecules such as organic molecules, hydrogen, hydrogen sulfide or ferrous ions to oxygen, nitrate or sulfate. In animals, these reactions involve complex organic molecules that are broken down to simpler molecules, such as carbon dioxide and water. Photosynthetic organisms, such as plants and cyanobacteria, use similar electron-transfer reactions to store energy absorbed from sunlight.
The most common set of catabolic reactions in animals can be separated into three main stages. In the first stage, large organic molecules, such as proteins, polysaccharides or lipids, are digested into their smaller components outside cells. Next, these smaller molecules are taken up by cells and converted to smaller molecules, usually acetyl coenzyme A (acetyl-CoA), which releases some energy. Finally, the acetyl group on acetyl-CoA is oxidized to water and carbon dioxide in the citric acid cycle and electron transport chain, releasing more energy while reducing the coenzyme nicotinamide adenine dinucleotide (NAD+) into NADH.
Digestion
Macromolecules cannot be directly processed by cells. Macromolecules must be broken into smaller units before they can be used in cell metabolism. Different classes of enzymes are used to digest these polymers. These digestive enzymes include proteases that digest proteins into amino acids, as well as glycoside hydrolases that digest polysaccharides into simple sugars known as monosaccharides.
Microbes simply secrete digestive enzymes into their surroundings, while animals only secrete these enzymes from specialized cells in their guts, including the stomach and pancreas, and in salivary glands. The amino acids or sugars released by these extracellular enzymes are then pumped into cells by active transport proteins.
Energy from organic compounds
Carbohydrate catabolism is the breakdown of carbohydrates into smaller units. Carbohydrates are usually taken into cells after they have been digested into monosaccharides such as glucose and fructose. Once inside, the major route of breakdown is glycolysis, in which glucose is converted into pyruvate. This process generates the energy-conveying molecule NADH from NAD+, and generates ATP from ADP for use in powering many processes within the cell. Pyruvate is an intermediate in several metabolic pathways, but the majority is converted to acetyl-CoA and fed into the citric acid cycle, which enables more ATP production by means of oxidative phosphorylation. This oxidation consumes molecular oxygen and releases water and the waste product carbon dioxide. When oxygen is lacking, or when pyruvate is temporarily produced faster than it can be consumed by the citric acid cycle (as in intense muscular exertion), pyruvate is converted to lactate by the enzyme lactate dehydrogenase, a process that also oxidizes NADH back to NAD+ for re-use in further glycolysis, allowing energy production to continue. The lactate is later converted back to pyruvate for ATP production where energy is needed, or back to glucose in the Cori cycle. An alternative route for glucose breakdown is the pentose phosphate pathway, which produces less energy but supports anabolism (biomolecule synthesis). This pathway reduces the coenzyme NADP+ to NADPH and produces pentose compounds such as ribose 5-phosphate for synthesis of many biomolecules such as nucleotides and aromatic amino acids.
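The overall stoichiometry of glycolysis summarized in this paragraph is, in its standard textbook form:

\[
\mathrm{glucose} + 2\,\mathrm{NAD^+} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i}
\;\longrightarrow\;
2\ \mathrm{pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^+} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
\]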
Fats are catabolized by hydrolysis to free fatty acids and glycerol. The glycerol enters glycolysis and the fatty acids are broken down by beta oxidation to release acetyl-CoA, which then is fed into the citric acid cycle. Fatty acids release more energy upon oxidation than carbohydrates. Steroids are also broken down by some bacteria in a process similar to beta oxidation, and this breakdown process involves the release of significant amounts of acetyl-CoA, propionyl-CoA, and pyruvate, which can all be used by the cell for energy. M. tuberculosis can also grow on the lipid cholesterol as a sole source of carbon, and genes involved in the cholesterol-use pathway(s) have been validated as important during various stages of the infection lifecycle of M. tuberculosis.
Amino acids are either used to synthesize proteins and other biomolecules, or oxidized to urea and carbon dioxide to produce energy. The oxidation pathway starts with the removal of the amino group by a transaminase. The amino group is fed into the urea cycle, leaving a deaminated carbon skeleton in the form of a keto acid. Several of these keto acids are intermediates in the citric acid cycle, for example α-ketoglutarate formed by deamination of glutamate. The glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Energy transformations
Oxidative phosphorylation
In oxidative phosphorylation, the electrons removed from organic molecules in areas such as the citric acid cycle are transferred to oxygen and the energy released is used to make ATP. This is done in eukaryotes by a series of proteins in the membranes of mitochondria called the electron transport chain. In prokaryotes, these proteins are found in the cell's inner membrane. These proteins use the energy from reduced molecules like NADH to pump protons across a membrane.
Pumping protons out of the mitochondria creates a proton concentration difference across the membrane and generates an electrochemical gradient. This force drives protons back into the mitochondrion through the base of an enzyme called ATP synthase. The flow of protons makes the stalk subunit rotate, causing the active site of the synthase domain to change shape and phosphorylate adenosine diphosphate—turning it into ATP.
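The electrochemical gradient described here is often expressed as a proton-motive force; the expression below is the standard bioenergetics formulation, included for context (sign conventions for the pH term vary between texts):

\[
\Delta p \;=\; \Delta\psi \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}
\;\approx\; \Delta\psi \;-\; 59\ \mathrm{mV}\times\Delta\mathrm{pH} \quad (\text{at }25\ ^\circ\mathrm{C}),
\]

where \(\Delta\psi\) is the electrical potential difference across the membrane and \(\Delta\mathrm{pH}\) is the pH difference.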
Energy from inorganic compounds
Chemolithotrophy is a type of metabolism found in prokaryotes where energy is obtained from the oxidation of inorganic compounds. These organisms can use hydrogen, reduced sulfur compounds (such as sulfide, hydrogen sulfide and thiosulfate), ferrous iron (Fe(II)) or ammonia as sources of reducing power and they gain energy from the oxidation of these compounds. These microbial processes are important in global biogeochemical cycles such as acetogenesis, nitrification and denitrification and are critical for soil fertility.
Energy from light
The energy in sunlight is captured by plants, cyanobacteria, purple bacteria, green sulfur bacteria and some protists. This process is often coupled to the conversion of carbon dioxide into organic compounds, as part of photosynthesis, which is discussed below. The energy capture and carbon fixation systems can, however, operate separately in prokaryotes, as purple bacteria and green sulfur bacteria can use sunlight as a source of energy, while switching between carbon fixation and the fermentation of organic compounds.
In many organisms, the capture of solar energy is similar in principle to oxidative phosphorylation, as it involves the storage of energy as a proton concentration gradient. This proton motive force then drives ATP synthesis. The electrons needed to drive this electron transport chain come from light-gathering proteins called photosynthetic reaction centres. Reaction centers are classified into two types depending on the nature of photosynthetic pigment present, with most photosynthetic bacteria only having one type, while plants and cyanobacteria have two.
In plants, algae, and cyanobacteria, photosystem II uses light energy to remove electrons from water, releasing oxygen as a waste product. The electrons then flow to the cytochrome b6f complex, which uses their energy to pump protons across the thylakoid membrane in the chloroplast. These protons move back through the membrane as they drive the ATP synthase, as before. The electrons then flow through photosystem I and can then be used to reduce the coenzyme NADP+.
Anabolism
Anabolism is the set of constructive metabolic processes where the energy released by catabolism is used to synthesize complex molecules. In general, the complex molecules that make up cellular structures are constructed step-by-step from smaller and simpler precursors. Anabolism involves three basic stages. First, the production of precursors such as amino acids, monosaccharides, isoprenoids and nucleotides, secondly, their activation into reactive forms using energy from ATP, and thirdly, the assembly of these precursors into complex molecules such as proteins, polysaccharides, lipids and nucleic acids.
Anabolism in organisms can be different according to the source of constructed molecules in their cells. Autotrophs such as plants can construct the complex organic molecules in their cells such as polysaccharides and proteins from simple molecules like carbon dioxide and water. Heterotrophs, on the other hand, require a source of more complex substances, such as monosaccharides and amino acids, to produce these complex molecules. Organisms can be further classified by ultimate source of their energy: photoautotrophs and photoheterotrophs obtain energy from light, whereas chemoautotrophs and chemoheterotrophs obtain energy from oxidation reactions.
Carbon fixation
Photosynthesis is the synthesis of carbohydrates from sunlight and carbon dioxide (CO2). In plants, cyanobacteria and algae, oxygenic photosynthesis splits water, with oxygen produced as a waste product. This process uses the ATP and NADPH produced by the photosynthetic reaction centres, as described above, to convert CO2 into glycerate 3-phosphate, which can then be converted into glucose. This carbon-fixation reaction is carried out by the enzyme RuBisCO as part of the Calvin–Benson cycle. Three types of photosynthesis occur in plants, C3 carbon fixation, C4 carbon fixation and CAM photosynthesis. These differ by the route that carbon dioxide takes to the Calvin cycle, with C3 plants fixing CO2 directly, while C4 and CAM photosynthesis incorporate the CO2 into other compounds first, as adaptations to deal with intense sunlight and dry conditions.
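The overall result of the oxygenic photosynthesis described here is conventionally summarized as:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\;\text{light}\;}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]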
In photosynthetic prokaryotes the mechanisms of carbon fixation are more diverse. Here, carbon dioxide can be fixed by the Calvin–Benson cycle, a reversed citric acid cycle, or the carboxylation of acetyl-CoA. Prokaryotic chemoautotrophs also fix CO2 through the Calvin–Benson cycle, but use energy from inorganic compounds to drive the reaction.
Carbohydrates and glycans
In carbohydrate anabolism, simple organic acids can be converted into monosaccharides such as glucose and then used to assemble polysaccharides such as starch. The generation of glucose from compounds like pyruvate, lactate, glycerol, glycerate 3-phosphate and amino acids is called gluconeogenesis. Gluconeogenesis converts pyruvate to glucose-6-phosphate through a series of intermediates, many of which are shared with glycolysis. However, this pathway is not simply glycolysis run in reverse, as several steps are catalyzed by non-glycolytic enzymes. This is important as it allows the formation and breakdown of glucose to be regulated separately, and prevents both pathways from running simultaneously in a futile cycle.
Although fat is a common way of storing energy, in vertebrates such as humans the fatty acids in these stores cannot be converted to glucose through gluconeogenesis as these organisms cannot convert acetyl-CoA into pyruvate; plants have the necessary enzymatic machinery, but animals do not. As a result, after long-term starvation, vertebrates need to produce ketone bodies from fatty acids to replace glucose in tissues such as the brain that cannot metabolize fatty acids. In other organisms such as plants and bacteria, this metabolic problem is solved using the glyoxylate cycle, which bypasses the decarboxylation step in the citric acid cycle and allows the transformation of acetyl-CoA to oxaloacetate, where it can be used for the production of glucose. Besides fat, glucose is stored in most tissues as glycogen, an energy reserve available within the tissue that is built up through glycogenesis and is usually used to maintain the glucose level in blood.
Polysaccharides and glycans are made by the sequential addition of monosaccharides by glycosyltransferase from a reactive sugar-phosphate donor such as uridine diphosphate glucose (UDP-Glc) to an acceptor hydroxyl group on the growing polysaccharide. As any of the hydroxyl groups on the ring of the substrate can be acceptors, the polysaccharides produced can have straight or branched structures. The polysaccharides produced can have structural or metabolic functions themselves, or be transferred to lipids and proteins by the enzymes oligosaccharyltransferases.
Fatty acids, isoprenoids and sterol
Fatty acids are made by fatty acid synthases that polymerize and then reduce acetyl-CoA units. The acyl chains in the fatty acids are extended by a cycle of reactions that add the acyl group, reduce it to an alcohol, dehydrate it to an alkene group and then reduce it again to an alkane group. The enzymes of fatty acid biosynthesis are divided into two groups: in animals and fungi, all these fatty acid synthase reactions are carried out by a single multifunctional type I protein, while in plant plastids and bacteria separate type II enzymes perform each step in the pathway.
Terpenes and isoprenoids are a large class of lipids that include the carotenoids and form the largest class of plant natural products. These compounds are made by the assembly and modification of isoprene units donated from the reactive precursors isopentenyl pyrophosphate and dimethylallyl pyrophosphate. These precursors can be made in different ways. In animals and archaea, the mevalonate pathway produces these compounds from acetyl-CoA, while in plants and bacteria the non-mevalonate pathway uses pyruvate and glyceraldehyde 3-phosphate as substrates. One important reaction that uses these activated isoprene donors is sterol biosynthesis. Here, the isoprene units are joined to make squalene and then folded up and formed into a set of rings to make lanosterol. Lanosterol can then be converted into other sterols such as cholesterol and ergosterol.
Proteins
Organisms vary in their ability to synthesize the 20 common amino acids. Most bacteria and plants can synthesize all twenty, but mammals can only synthesize eleven nonessential amino acids, so nine essential amino acids must be obtained from food. Some simple parasites, such as the bacteria Mycoplasma pneumoniae, lack all amino acid synthesis and take their amino acids directly from their hosts. All amino acids are synthesized from intermediates in glycolysis, the citric acid cycle, or the pentose phosphate pathway. Nitrogen is provided by glutamate and glutamine. Nonessential amino acid synthesis depends on the formation of the appropriate alpha-keto acid, which is then transaminated to form an amino acid.
Amino acids are made into proteins by being joined in a chain of peptide bonds. Each different protein has a unique sequence of amino acid residues: this is its primary structure. Just as the letters of the alphabet can be combined to form an almost endless variety of words, amino acids can be linked in varying sequences to form a huge variety of proteins. Proteins are made from amino acids that have been activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA precursor is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which joins the amino acid onto the elongating protein chain, using the sequence information in a messenger RNA.
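The "almost endless variety" can be made concrete with a simple count: a chain of n residues drawn from the 20 common amino acids has 20^n possible sequences, so even a modest protein of 100 residues admits

\[
20^{100} \;=\; 10^{100\,\log_{10}20} \;\approx\; 10^{130}
\]

possible sequences, vastly more than the roughly 10^{80} atoms in the observable universe.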
Nucleotide synthesis and salvage
Nucleotides are made from amino acids, carbon dioxide and formic acid in pathways that require large amounts of metabolic energy. Consequently, most organisms have efficient systems to salvage preformed nucleotides. Purines are synthesized as nucleosides (bases attached to ribose). Both adenine and guanine are made from the precursor nucleoside inosine monophosphate, which is synthesized using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as formate transferred from the coenzyme tetrahydrofolate. Pyrimidines, on the other hand, are synthesized from the base orotate, which is formed from glutamine and aspartate.
Xenobiotics and redox metabolism
All organisms are constantly exposed to compounds that they cannot use as foods and that would be harmful if they accumulated in cells, as they have no metabolic function. These potentially damaging compounds are called xenobiotics. Xenobiotics such as synthetic drugs, natural poisons and antibiotics are detoxified by a set of xenobiotic-metabolizing enzymes. In humans, these include cytochrome P450 oxidases, UDP-glucuronosyltransferases, and glutathione S-transferases. This system of enzymes acts in three stages to firstly oxidize the xenobiotic (phase I) and then conjugate water-soluble groups onto the molecule (phase II). The modified water-soluble xenobiotic can then be pumped out of cells and in multicellular organisms may be further metabolized before being excreted (phase III). In ecology, these reactions are particularly important in microbial biodegradation of pollutants and the bioremediation of contaminated land and oil spills. Many of these microbial reactions are shared with multicellular organisms, but due to the incredible diversity of types of microbes these organisms are able to deal with a far wider range of xenobiotics than multicellular organisms, and can degrade even persistent organic pollutants such as organochloride compounds.
A related problem for aerobic organisms is oxidative stress. Here, processes including oxidative phosphorylation and the formation of disulfide bonds during protein folding produce reactive oxygen species such as hydrogen peroxide. These damaging oxidants are removed by antioxidant metabolites such as glutathione and enzymes such as catalases and peroxidases.
Thermodynamics of living organisms
Living organisms must obey the laws of thermodynamics, which describe the transfer of heat and work. The second law of thermodynamics states that in any isolated system, the amount of entropy (disorder) cannot decrease. Although living organisms' amazing complexity appears to contradict this law, life is possible as all organisms are open systems that exchange matter and energy with their surroundings. Living systems are not in equilibrium, but instead are dissipative systems that maintain their state of high complexity by causing a larger increase in the entropy of their environments. The metabolism of a cell achieves this by coupling the spontaneous processes of catabolism to the non-spontaneous processes of anabolism. In thermodynamic terms, metabolism maintains order by creating disorder.
Regulation and control
As the environments of most organisms are constantly changing, the reactions of metabolism must be finely regulated to maintain a constant set of conditions within cells, a condition called homeostasis. Metabolic regulation also allows organisms to respond to signals and interact actively with their environments. Two closely linked concepts are important for understanding how metabolic pathways are controlled. Firstly, the regulation of an enzyme in a pathway is how its activity is increased and decreased in response to signals. Secondly, the control exerted by this enzyme is the effect that these changes in its activity have on the overall rate of the pathway (the flux through the pathway). For example, an enzyme may show large changes in activity (i.e. it is highly regulated) but if these changes have little effect on the flux of a metabolic pathway, then this enzyme is not involved in the control of the pathway.
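In metabolic control analysis, the standard quantitative framework for the distinction drawn here (the framework itself is not named in the text), the control an enzyme exerts over a pathway is measured by its flux control coefficient:

\[
C^{J}_{E} \;=\; \frac{\partial J/J}{\partial E/E} \;=\; \frac{\partial \ln J}{\partial \ln E},
\]

the fractional change in pathway flux \(J\) per fractional change in enzyme activity \(E\). An enzyme that is highly regulated yet has \(C^{J}_{E}\) near zero changes its activity a great deal without altering the flux, which is exactly the situation described above.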
There are multiple levels of metabolic regulation. In intrinsic regulation, the metabolic pathway self-regulates to respond to changes in the levels of substrates or products; for example, a decrease in the amount of product can increase the flux through the pathway to compensate. This type of regulation often involves allosteric regulation of the activities of multiple enzymes in the pathway. Extrinsic control involves a cell in a multicellular organism changing its metabolism in response to signals from other cells. These signals are usually in the form of water-soluble messengers such as hormones and growth factors and are detected by specific receptors on the cell surface. These signals are then transmitted inside the cell by second messenger systems that often involve the phosphorylation of proteins.
A very well understood example of extrinsic control is the regulation of glucose metabolism by the hormone insulin. Insulin is produced in response to rises in blood glucose levels. Binding of the hormone to insulin receptors on cells then activates a cascade of protein kinases that cause the cells to take up glucose and convert it into storage molecules such as fatty acids and glycogen. The metabolism of glycogen is controlled by activity of phosphorylase, the enzyme that breaks down glycogen, and glycogen synthase, the enzyme that makes it. These enzymes are regulated in a reciprocal fashion, with phosphorylation inhibiting glycogen synthase, but activating phosphorylase. Insulin causes glycogen synthesis by activating protein phosphatases and producing a decrease in the phosphorylation of these enzymes.
Evolution
The central pathways of metabolism described above, such as glycolysis and the citric acid cycle, are present in all three domains of living things and were present in the last universal common ancestor. This universal ancestral cell was prokaryotic and probably a methanogen that had extensive amino acid, nucleotide, carbohydrate and lipid metabolism. The retention of these ancient pathways during later evolution may be the result of these reactions having been an optimal solution to their particular metabolic problems, with pathways such as glycolysis and the citric acid cycle producing their end products highly efficiently and in a minimal number of steps. The first pathways of enzyme-based metabolism may have been parts of purine nucleotide metabolism, while previous metabolic pathways were a part of the ancient RNA world.
Many models have been proposed to describe the mechanisms by which novel metabolic pathways evolve. These include the sequential addition of novel enzymes to a short ancestral pathway, the duplication and then divergence of entire pathways as well as the recruitment of pre-existing enzymes and their assembly into a novel reaction pathway. The relative importance of these mechanisms is unclear, but genomic studies have shown that enzymes in a pathway are likely to have a shared ancestry, suggesting that many pathways have evolved in a step-by-step fashion with novel functions created from pre-existing steps in the pathway. An alternative model comes from studies that trace the evolution of proteins' structures in metabolic networks; this has suggested that enzymes are pervasively recruited, borrowing enzymes to perform similar functions in different metabolic pathways (evident in the MANET database). These recruitment processes result in an evolutionary enzymatic mosaic. A third possibility is that some parts of metabolism might exist as "modules" that can be reused in different pathways and perform similar functions on different molecules.
As well as the evolution of new metabolic pathways, evolution can also cause the loss of metabolic functions. For example, in some parasites metabolic processes that are not essential for survival are lost and preformed amino acids, nucleotides and carbohydrates may instead be scavenged from the host. Similar reduced metabolic capabilities are seen in endosymbiotic organisms.
Investigation and manipulation
Classically, metabolism is studied by a reductionist approach that focuses on a single metabolic pathway. Particularly valuable is the use of radioactive tracers at the whole-organism, tissue and cellular levels, which define the paths from precursors to final products by identifying radioactively labelled intermediates and products. The enzymes that catalyze these chemical reactions can then be purified and their kinetics and responses to inhibitors investigated. A parallel approach is to identify the small molecules in a cell or tissue; the complete set of these molecules is called the metabolome. Overall, these studies give a good view of the structure and function of simple metabolic pathways, but are inadequate when applied to more complex systems such as the metabolism of a complete cell.
An idea of the complexity of the metabolic networks in cells that contain thousands of different enzymes is given by a map of the interactions between just 43 proteins and 40 metabolites: the sequences of genomes provide lists containing anything up to 26,500 genes. However, it is now possible to use this genomic data to reconstruct complete networks of biochemical reactions and produce more holistic mathematical models that may explain and predict their behavior. These models are especially powerful when used to integrate the pathway and metabolite data obtained through classical methods with data on gene expression from proteomic and DNA microarray studies. Using these techniques, a model of human metabolism has now been produced, which will guide future drug discovery and biochemical research. These models are now used in network analysis, to classify human diseases into groups that share common proteins or metabolites.
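A minimal sketch of the kind of network reconstruction and mathematical modelling described above, using a toy three-reaction network and only NumPy; the network is invented for illustration, and genome-scale constraint-based models (not named in the text) apply the same steady-state condition, S v = 0, to thousands of reactions:

import numpy as np

# Toy network:  R1: -> A      R2: A -> B      R3: B ->
# Rows are metabolites (A, B); columns are reactions (R1, R2, R3).
S = np.array([[ 1, -1,  0],    # A: produced by R1, consumed by R2
              [ 0,  1, -1]])   # B: produced by R2, consumed by R3

# At steady state the metabolite concentrations do not change, so S @ v = 0;
# the admissible flux distributions form the null space of S.
_, _, Vt = np.linalg.svd(S)
null_basis = Vt[S.shape[0]:].T   # null-space basis (the rank equals 2 for this matrix)
v = null_basis[:, 0]
v = v / v[0]                     # scale so that the uptake flux R1 equals 1
print(v)                         # [1. 1. 1.] -> flux flows straight through A and B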
Bacterial metabolic networks are a striking example of bow-tie organization, an architecture able to input a wide range of nutrients and produce a large variety of products and complex macromolecules using a relatively few intermediate common currencies.
A major technological application of this information is metabolic engineering. Here, organisms such as yeast, plants or bacteria are genetically modified to make them more useful in biotechnology and aid the production of drugs such as antibiotics or industrial chemicals such as 1,3-propanediol and shikimic acid. These genetic modifications usually aim to reduce the amount of energy used to produce the product, increase yields and reduce the production of wastes.
History
The term metabolism is derived from the Ancient Greek word μεταβολή ("metabole", meaning "a change"), which in turn derives from μεταβάλλειν ("metaballein", "to change").
Greek philosophy
Aristotle's The Parts of Animals sets out enough details of his views on metabolism for an open flow model to be made. He believed that at each stage of the process, materials from food were transformed, with heat being released as the classical element of fire, and residual materials being excreted as urine, bile, or faeces.
Ibn al-Nafis described metabolism in his 1260 AD work titled Al-Risalah al-Kamiliyyah fil Siera al-Nabawiyyah (The Treatise of Kamil on the Prophet's Biography) which included the following phrase "Both the body and its parts are in a continuous state of dissolution and nourishment, so they are inevitably undergoing permanent change."
Application of the scientific method and modern metabolic theories
The history of the scientific study of metabolism spans several centuries and has moved from examining whole animals in early studies, to examining individual metabolic reactions in modern biochemistry. The first controlled experiments in human metabolism were published by Santorio Santorio in 1614 in his book Ars de statica medicina. He described how he weighed himself before and after eating, sleep, working, sex, fasting, drinking, and excreting. He found that most of the food he took in was lost through what he called "insensible perspiration".
In these early studies, the mechanisms of these metabolic processes had not been identified and a vital force was thought to animate living tissue. In the 19th century, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that fermentation was catalyzed by substances within the yeast cells he called "ferments". He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." This discovery, along with the publication by Friedrich Wöhler in 1828 of a paper on the chemical synthesis of urea (notable as the first organic compound prepared from wholly inorganic precursors), proved that the organic compounds and chemical reactions found in cells were no different in principle from any other part of chemistry.
It was the discovery of enzymes at the beginning of the 20th century by Eduard Buchner that separated the study of the chemical reactions of metabolism from the biological study of cells, and marked the beginnings of biochemistry. The mass of biochemical knowledge grew rapidly throughout the early 20th century. One of the most prolific of these modern biochemists was Hans Krebs who made huge contributions to the study of metabolism. He discovered the urea cycle and later, working with Hans Kornberg, the citric acid cycle and the glyoxylate cycle.
See also
, a "metabolism first" theory of the origin of life
Microphysiometry
Oncometabolism
References
Further reading
Introductory
Advanced
External links
General information
The Biochemistry of Metabolism (archived 8 March 2005)
Sparknotes SAT biochemistry Overview of biochemistry. School level.
MIT Biology Hypertextbook Undergraduate-level guide to molecular biology.
Human metabolism
Topics in Medical Biochemistry Guide to human metabolic pathways. School level.
THE Medical Biochemistry Page Comprehensive resource on human metabolism.
Databases
Flow Chart of Metabolic Pathways at ExPASy
IUBMB-Nicholson Metabolic Pathways Chart
SuperCYP: Database for Drug-Cytochrome-Metabolism
Metabolic pathways
Metabolism reference Pathway
Underwater diving physiology
Chemical property
A chemical property is any of a material's properties that becomes evident during, or after, a chemical reaction; that is, any attribute that can be established only by changing a substance's chemical identity. Simply speaking, chemical properties cannot be determined just by viewing or touching the substance; the substance's internal structure must be affected greatly for its chemical properties to be investigated. When a substance undergoes a chemical reaction, its properties will change drastically, resulting in chemical change. However, a catalytic property would also be a chemical property.
Chemical properties can be contrasted with physical properties, which can be discerned without changing the substance's structure. However, for many properties within the scope of physical chemistry, and other disciplines at the boundary between chemistry and physics, the distinction may be a matter of the researcher's perspective. Material properties, both physical and chemical, can be viewed as supervenient; i.e., secondary to the underlying reality. Several layers of supervenience are possible.
Chemical properties can be used for building chemical classifications. They can also be useful to identify an unknown substance or to separate or purify it from other substances. Materials science will normally consider the chemical properties of a substance to guide its applications.
Examples
Heat of combustion
Enthalpy of formation
Toxicity
Chemical stability in a given environment
Flammability (the ability to burn)
Preferred oxidation state(s)
Ability to corrode
Combustibility
Acidity and basicity
See also
Chemical structure
Material properties
Biological activity
Quantitative structure–activity relationship (QSAR)
Lipinski's Rule of Five, describing molecular properties of drugs
References
Biochemistry
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process alcoholic fermentation in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need it. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need only ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
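As a quick check of the generalized formula, the short sketch below (an illustration added here, not part of the source article) enumerates CnH2nOn for small n and confirms that glucose fits the 1:2:1 pattern, while deoxyribose, having lost one oxygen, does not.

```python
# Minimal sketch: enumerate the generalized monosaccharide formula CnH2nOn
# for small n and compare against the formulas quoted in the text.
def monosaccharide_formula(n: int) -> str:
    """Return the CnH2nOn formula string for an n-carbon monosaccharide."""
    return f"C{n}H{2 * n}O{n}"

for n in range(3, 7):
    print(n, monosaccharide_formula(n))

# Glucose and fructose (C6H12O6) fit the 1:2:1 pattern;
# deoxyribose (C5H10O4) does not, because it has lost one oxygen.
assert monosaccharide_formula(6) == "C6H12O6"
assert monosaccharide_formula(5) != "C5H10O4"
```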
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the aldehyde on carbon 1 and the hydroxyl on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring are rare.
Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined together form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant cell walls, and glycogen is used as a form of energy storage in animals.
Sugars can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Sucrose (saccharose) does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules, and the term is to some extent a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol).
In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
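To make the dehydration step concrete, the sketch below (illustrative only) does the bookkeeping at the level of molecular formulas; the formulas for glycine (C2H5NO2) and alanine (C3H7NO2) are standard values, and the resulting dipeptide formula simply reflects the loss of one water molecule during peptide-bond formation.

```python
# Minimal sketch of dehydration synthesis at the formula level:
# joining two amino acids releases one water molecule.
from collections import Counter

glycine = Counter({"C": 2, "H": 5, "N": 1, "O": 2})   # C2H5NO2
alanine = Counter({"C": 3, "H": 7, "N": 1, "O": 2})   # C3H7NO2
water   = Counter({"H": 2, "O": 1})

def condense(a: Counter, b: Counter) -> Counter:
    """Combine two amino acids and remove one H2O, as in peptide-bond formation."""
    combined = a + b
    combined.subtract(water)
    return +combined  # drop zero or negative counts

dipeptide = condense(glycine, alanine)
print(dict(dipeptide))   # {'C': 5, 'H': 10, 'N': 2, 'O': 3} -> glycylalanine, C5H10N2O3
```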
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. The two heavy chains are linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
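The quoted speed-up is easy to sanity-check with simple arithmetic; the snippet below (added here purely as an illustration) divides roughly 3,000 years, expressed in seconds, by a rate enhancement of 10^11 and lands under one second, consistent with the statement above.

```python
# Back-of-the-envelope check of the rate enhancement quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
uncatalyzed = 3_000 * SECONDS_PER_YEAR     # ~9.5e10 seconds without an enzyme
rate_enhancement = 1e11                    # factor quoted in the text
catalyzed = uncatalyzed / rate_enhancement
print(f"{uncatalyzed:.2e} s uncatalyzed -> {catalyzed:.2f} s with an enzyme")
assert catalyzed < 1.0                     # under a second, as stated
```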
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helices can be seen in schematic depictions of hemoglobin. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The beta chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce them in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins: they are first hydrolyzed into their component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
Nucleic acids
Nucleic acid, so called because of its prevalence in cellular nuclei, is the generic name for this family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine pairs with thymine (in DNA) or uracil (in RNA), while cytosine and guanine pair only with one another. An adenine–thymine or adenine–uracil pair is held together by two hydrogen bonds, whereas a cytosine–guanine pair is held together by three.
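The pairing rules above translate directly into a small lookup table. The sketch below is a minimal illustration (not from the source) of complementary-strand construction and of the hydrogen-bond counts per base pair.

```python
# Minimal sketch of the base-pairing rules described above.
DNA_PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}
RNA_PAIR = {"A": "U", "U": "A", "C": "G", "G": "C"}
HYDROGEN_BONDS = {frozenset("AT"): 2, frozenset("AU"): 2, frozenset("CG"): 3}

def complement(strand: str, rna: bool = False) -> str:
    """Return the complementary strand, read in the opposite direction."""
    table = RNA_PAIR if rna else DNA_PAIR
    return "".join(table[base] for base in reversed(strand))

print(complement("ATGCGT"))             # ACGCAT
print(HYDROGEN_BONDS[frozenset("CG")])  # 3 hydrogen bonds for a G:C pair
```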
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents in the form of NAD+ (nicotinamide adenine dinucleotide, oxidized form) converted to NADH (the reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), NAD+ is regenerated by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide and generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (the 28 plus the two from glycolysis and the two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
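The ATP arithmetic in this paragraph can be tallied explicitly; the snippet below simply sums the contributions using the counts given in this article (different textbooks quote slightly different per-NADH yields, so the exact total varies between sources).

```python
# Bookkeeping sketch of ATP yield per glucose, using the counts given in this article.
yield_per_glucose = {
    "glycolysis (substrate-level)": 2,
    "citric acid cycle (substrate-level)": 2,
    "oxidative phosphorylation, 8 NADH": 24,
    "oxidative phosphorylation, 2 quinols": 4,
}
total = sum(yield_per_glucose.values())
print(total)        # 32 ATP per glucose, as stated in the text
assert total == 32
```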
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the generation of glucose from noncarbohydrate sources such as fat and protein. It becomes important when glycogen supplies in the liver are depleted. The pathway is essentially a reversal of glycolysis, from pyruvate to glucose, and can draw on many carbon sources, such as amino acids, glycerol and citric acid (Krebs) cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose using this process. Gluconeogenesis is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, lactate crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle.
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is no sharply defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, and genetics studies their heredity, which happens to be carried by their genome. The following descriptions depict one possible view of the relationships between the fields:
Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. This can often be inferred from the absence of a normal component (e.g. one gene), through the study of "mutants": organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994.
2nd Edition, Garland, 1989.
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Biotechnology
Molecular biology
Biosynthesis
Biosynthesis, i.e., chemical synthesis occurring in biological contexts, is a term most often referring to multi-step, enzyme-catalyzed processes where chemical substances absorbed as nutrients (or previously converted through biosynthesis) serve as enzyme substrates, with conversion by the living organism either into simpler or more complex products. Examples of biosynthetic pathways include those for the production of amino acids, lipid membrane components, and nucleotides, but also for the production of all classes of biological macromolecules, and of acetyl-coenzyme A, adenosine triphosphate, nicotinamide adenine dinucleotide and other key intermediate and transactional molecules needed for metabolism. Thus, in biosynthesis, any of an array of compounds, from simple to complex, are converted into other compounds, and so it includes both the anabolism and catabolism (building up and breaking down) of complex molecules (including macromolecules). Biosynthetic processes are often represented via charts of metabolic pathways. A particular biosynthetic pathway may be located within a single cellular organelle (e.g., mitochondrial fatty acid synthesis pathways), while others involve enzymes that are located across an array of cellular organelles and structures (e.g., the biosynthesis of glycosylated cell surface proteins).
Elements of biosynthesis
Elements of biosynthesis include: precursor compounds, chemical energy (e.g. ATP), and catalytic enzymes which may need coenzymes (e.g. NADH, NADPH). These elements create monomers, the building blocks for macromolecules. Some important biological macromolecules include: proteins, which are composed of amino acid monomers joined via peptide bonds, and DNA molecules, which are composed of nucleotides joined via phosphodiester bonds.
Properties of chemical reactions
Biosynthesis occurs due to a series of chemical reactions. For these reactions to take place, the following elements are necessary:
Precursor compounds: these compounds are the starting molecules or substrates in a reaction. These may also be viewed as the reactants in a given chemical process.
Chemical energy: chemical energy can be found in the form of high energy molecules. These molecules are required for energetically unfavourable reactions. Furthermore, the hydrolysis of these compounds drives a reaction forward. High energy molecules, such as ATP, have three phosphates. Often, the terminal phosphate is split off during hydrolysis and transferred to another molecule.
Catalysts: these may be, for example, metal ions or coenzymes; they catalyze a reaction by increasing its rate and lowering the activation energy.
In the simplest sense, the reactions that occur in biosynthesis have the following format:
Reactant → Product (catalyzed by an enzyme)
Some variations of this basic equation which will be discussed later in more detail are:
Simple compounds which are converted into other compounds, usually as part of a multiple step reaction pathway. Two examples of this type of reaction occur during the formation of nucleic acids and the charging of tRNA prior to translation. For some of these steps, chemical energy is required:
Precursor molecule + ATP ⇔ product-AMP + PPi
Simple compounds that are converted into other compounds with the assistance of cofactors. For example, the synthesis of phospholipids requires acetyl CoA, while the synthesis of another membrane component, sphingolipids, requires NADH and FADH for the formation of the sphingosine backbone. The general equation for these examples is:
Precursor molecule + cofactor → macromolecule (catalyzed by an enzyme)
Simple compounds that join to create a macromolecule. For example, fatty acids join to form phospholipids. In turn, phospholipids and cholesterol interact noncovalently in order to form the lipid bilayer. This reaction may be depicted as follows:
Molecule 1 + Molecule 2 → macromolecule
Lipid
Many intricate macromolecules are synthesized in a pattern of simple, repeated structures. For example, the simplest structures of lipids are fatty acids. Fatty acids are hydrocarbon derivatives; they contain a carboxyl group "head" and a hydrocarbon chain "tail". These fatty acids create larger components, which in turn incorporate noncovalent interactions to form the lipid bilayer.
Fatty acid chains are found in two major components of membrane lipids: phospholipids and sphingolipids. A third major membrane component, cholesterol, does not contain these fatty acid units.
Eukaryotic phospholipids
The foundation of all biomembranes consists of a bilayer structure of phospholipids. The phospholipid molecule is amphipathic; it contains a hydrophilic polar head and a hydrophobic nonpolar tail. The phospholipid heads interact with each other and aqueous media, while the hydrocarbon tails orient themselves in the center, away from water. These latter interactions drive the bilayer structure that acts as a barrier for ions and molecules.
There are various types of phospholipids; consequently, their synthesis pathways differ. However, the first step in phospholipid synthesis involves the formation of phosphatidate or diacylglycerol 3-phosphate at the endoplasmic reticulum and outer mitochondrial membrane. The synthesis pathway is found below:
The pathway starts with glycerol 3-phosphate, which gets converted to lysophosphatidate via the addition of a fatty acid chain provided by acyl coenzyme A. Then, lysophosphatidate is converted to phosphatidate via the addition of another fatty acid chain contributed by a second acyl CoA; all of these steps are catalyzed by the glycerol phosphate acyltransferase enzyme. Phospholipid synthesis continues in the endoplasmic reticulum, and the biosynthesis pathway diverges depending on the components of the particular phospholipid.
Sphingolipids
Like phospholipids, these fatty acid derivatives have a polar head and nonpolar tails. Unlike phospholipids, sphingolipids have a sphingosine backbone. Sphingolipids exist in eukaryotic cells and are particularly abundant in the central nervous system. For example, sphingomyelin is part of the myelin sheath of nerve fibers.
Sphingolipids are formed from ceramides that consist of a fatty acid chain attached to the amino group of a sphingosine backbone. These ceramides are synthesized from the acylation of sphingosine. The biosynthetic pathway for sphingosine is found below:
During sphingosine synthesis, palmitoyl CoA and serine undergo a condensation reaction which results in the formation of 3-dehydrosphinganine. This product is then reduced to form dihydrosphingosine, which is converted to sphingosine via oxidation by FAD.
Cholesterol
This lipid belongs to a class of molecules called sterols. Sterols have four fused rings and a hydroxyl group. Cholesterol is a particularly important molecule. Not only does it serve as a component of lipid membranes, it is also a precursor to several steroid hormones, including cortisol, testosterone, and estrogen.
Cholesterol is synthesized from acetyl CoA. The pathway is shown below:
More generally, this synthesis occurs in three stages, with the first stage taking place in the cytoplasm and the second and third stages occurring in the endoplasmic reticulum. The stages are as follows:
1. The synthesis of isopentenyl pyrophosphate, the "building block" of cholesterol
2. The formation of squalene via the condensation of six molecules of isopentenyl pyrophosphate
3. The conversion of squalene into cholesterol via several enzymatic reactions
Nucleotides
The biosynthesis of nucleotides involves enzyme-catalyzed reactions that convert substrates into more complex products. Nucleotides are the building blocks of DNA and RNA. Nucleotides are composed of a five-membered ring formed from ribose sugar in RNA, and deoxyribose sugar in DNA; these sugars are linked to a purine or pyrimidine base with a glycosidic bond and a phosphate group at the 5' location of the sugar.
Purine nucleotides
The RNA nucleotides adenosine and guanosine consist of a purine base attached to a ribose sugar with a glycosidic bond, while the DNA nucleotides deoxyadenosine and deoxyguanosine have the purine base attached to a deoxyribose sugar. The purine bases on DNA and RNA nucleotides are synthesized in a twelve-step reaction mechanism present in most single-celled organisms. Higher eukaryotes employ a similar reaction mechanism in ten reaction steps. Purine bases are synthesized by converting phosphoribosyl pyrophosphate (PRPP) to inosine monophosphate (IMP), which is the first key intermediate in purine base biosynthesis. Further enzymatic modification of IMP produces the adenosine and guanosine bases of nucleotides. The individual steps are listed below, and a compact summary follows the list.
The first step in purine biosynthesis is a condensation reaction, performed by glutamine-PRPP amidotransferase. This enzyme transfers the amino group from glutamine to PRPP, forming 5-phosphoribosylamine. The following step requires the activation of glycine by the addition of a phosphate group from ATP.
GAR synthetase performs the condensation of activated glycine onto 5-phosphoribosylamine, forming glycineamide ribonucleotide (GAR).
GAR transformylase adds a formyl group onto the amino group of GAR, forming formylglycinamide ribonucleotide (FGAR).
FGAR amidotransferase catalyzes the addition of a nitrogen group to FGAR, forming formylglycinamidine ribonucleotide (FGAM).
FGAM cyclase catalyzes ring closure, which involves removal of a water molecule, forming the 5-membered imidazole ring 5-aminoimidazole ribonucleotide (AIR).
N5-CAIR synthetase transfers a carboxyl group, forming the intermediate N5-carboxyaminoimidazole ribonucleotide (N5-CAIR).
N5-CAIR mutase rearranges the carboxyl functional group and transfers it onto the imidazole ring, forming carboxyaminoimidazole ribonucleotide (CAIR). The two-step mechanism of CAIR formation from AIR is mostly found in single-celled organisms. Higher eukaryotes contain the enzyme AIR carboxylase, which transfers a carboxyl group directly to the AIR imidazole ring, forming CAIR.
SAICAR synthetase forms a peptide bond between aspartate and the added carboxyl group of the imidazole ring, forming N-succinyl-5-aminoimidazole-4-carboxamide ribonucleotide (SAICAR).
SAICAR lyase removes the carbon skeleton of the added aspartate, leaving the amino group and forming 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR).
AICAR transformylase transfers a formyl group to AICAR, forming N-formylaminoimidazole-4-carboxamide ribonucleotide (FAICAR).
The final step involves the enzyme IMP synthase, which performs the purine ring closure and forms the inosine monophosphate intermediate.
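For readers who prefer a compact summary, the sketch below restates the enzymatic steps above as an ordered list of (enzyme, product) pairs. The names are taken directly from the list in this article; the representation itself is just an illustrative data structure, not an established software interface.

```python
# Sketch: the purine pathway described above, represented as an ordered list of
# (enzyme, product) steps so it can easily be printed or traversed.
PURINE_PATHWAY = [
    ("glutamine-PRPP amidotransferase", "5-phosphoribosylamine"),
    ("GAR synthetase", "glycineamide ribonucleotide (GAR)"),
    ("GAR transformylase", "formylglycinamide ribonucleotide (FGAR)"),
    ("FGAR amidotransferase", "formylglycinamidine ribonucleotide (FGAM)"),
    ("FGAM cyclase", "5-aminoimidazole ribonucleotide (AIR)"),
    ("N5-CAIR synthetase", "N5-carboxyaminoimidazole ribonucleotide (N5-CAIR)"),
    ("N5-CAIR mutase", "carboxyaminoimidazole ribonucleotide (CAIR)"),
    ("SAICAR synthetase", "SAICAR"),
    ("SAICAR lyase", "AICAR"),
    ("AICAR transformylase", "FAICAR"),
    ("IMP synthase", "inosine monophosphate (IMP)"),
]

for step, (enzyme, product) in enumerate(PURINE_PATHWAY, start=1):
    print(f"step {step:2d}: {enzyme} -> {product}")
```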
Pyrimidine nucleotides
Other DNA and RNA nucleotide bases that are linked to the ribose sugar via a glycosidic bond are thymine, cytosine and uracil (which is only found in RNA).
Uridine monophosphate biosynthesis involves an enzyme that is located in the mitochondrial inner membrane and multifunctional enzymes that are located in the cytosol.
The first step involves the enzyme carbamoyl phosphate synthase combining glutamine with CO2 in an ATP dependent reaction to form carbamoyl phosphate.
Aspartate carbamoyltransferase condenses carbamoyl phosphate with aspartate to form ureidosuccinate (carbamoyl aspartate).
Dihydroorotase performs ring closure, a reaction that loses water, to form dihydroorotate.
Dihydroorotate dehydrogenase, located within the mitochondrial inner membrane, oxidizes dihydroorotate to orotate.
Orotate phosphoribosyltransferase (OMP pyrophosphorylase) condenses orotate with PRPP to form orotidine-5'-phosphate.
OMP decarboxylase catalyzes the conversion of orotidine-5'-phosphate to UMP.
After the uridine nucleotide base is synthesized, the other bases, cytosine and thymine, are synthesized. Cytosine biosynthesis is a two-step reaction which involves the conversion of UMP to UTP. Phosphate addition to UMP is catalyzed by a kinase enzyme. The enzyme CTP synthase catalyzes the next reaction step: the conversion of UTP to CTP by transferring an amino group from glutamine to uridine; this forms the cytosine base of CTP. The overall reaction is: UTP + ATP + glutamine ⇔ CTP + ADP + glutamate.
Cytosine is a nucleotide that is present in both DNA and RNA. However, uracil is only found in RNA. Therefore, after UTP is synthesized, it must be converted into a deoxy form to be incorporated into DNA. This conversion involves the enzyme ribonucleoside triphosphate reductase. The reaction, which removes the 2'-OH of the ribose sugar to generate deoxyribose, is not affected by the bases attached to the sugar. This non-specificity allows ribonucleoside triphosphate reductase to convert all nucleotide triphosphates to deoxyribonucleotides by a similar mechanism.
In contrast to uracil, thymine bases are found mostly in DNA, not RNA. Cells do not normally contain thymine bases that are linked to ribose sugars in RNA, thus indicating that cells only synthesize deoxyribose-linked thymine. The enzyme thymidylate synthetase is responsible for synthesizing thymine residues from dUMP to dTMP. This reaction transfers a methyl group onto the uracil base of dUMP to generate dTMP. The thymidylate synthase reaction is: dUMP + 5,10-methylenetetrahydrofolate ⇔ dTMP + dihydrofolate.
DNA
Although there are differences between eukaryotic and prokaryotic DNA synthesis, the following section denotes key characteristics of DNA replication shared by both.
DNA is composed of nucleotides that are joined by phosphodiester bonds. DNA synthesis, which takes place in the nucleus, is a semiconservative process, which means that the resulting DNA molecule contains an original strand from the parent structure and a new strand. DNA synthesis is catalyzed by a family of DNA polymerases that require four deoxynucleoside triphosphates, a template strand, and a primer with a free 3'OH in which to incorporate nucleotides.
In order for DNA replication to occur, a replication fork is created by enzymes called helicases which unwind the DNA helix. Topoisomerases at the replication fork remove supercoils caused by DNA unwinding, and single-stranded DNA binding proteins maintain the two single-stranded DNA templates stabilized prior to replication.
DNA synthesis is initiated by the RNA polymerase primase, which makes an RNA primer with a free 3'OH. This primer is attached to the single-stranded DNA template, and DNA polymerase elongates the chain by incorporating nucleotides; DNA polymerase also proofreads the newly synthesized DNA strand.
During the polymerization reaction catalyzed by DNA polymerase, a nucleophilic attack occurs by the 3'OH of the growing chain on the innermost phosphorus atom of a deoxynucleoside triphosphate; this yields the formation of a phosphodiester bridge that attaches a new nucleotide and releases pyrophosphate.
Two types of strands are created simultaneously during replication: the leading strand, which is synthesized continuously and grows towards the replication fork, and the lagging strand, which is made discontinuously in Okazaki fragments and grows away from the replication fork. Okazaki fragments are covalently joined by DNA ligase to form a continuous strand.
Then, to complete DNA replication, RNA primers are removed, and the resulting gaps are replaced with DNA and joined via DNA ligase.
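A minimal sketch of the semiconservative idea follows, assuming simple Watson–Crick pairing and ignoring primers, Okazaki fragments and proofreading: each daughter duplex keeps one parental strand and gains one newly synthesized complement. The sequences are arbitrary toy strings added here for illustration.

```python
# Minimal sketch of semiconservative replication: each daughter duplex keeps
# one parental strand and gains one newly synthesized complementary strand.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def synthesize_complement(template):
    """Build the new strand antiparallel to the template."""
    return "".join(PAIR[base] for base in reversed(template))

def replicate(duplex):
    """Return two daughter duplexes, each containing one parental strand."""
    return [(strand, synthesize_complement(strand)) for strand in duplex]

parent_strand = "ATGCCGTA"
parent_duplex = (parent_strand, synthesize_complement(parent_strand))
for daughter in replicate(parent_duplex):
    print(daughter)
```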
Amino acids
A protein is a polymer that is composed of amino acids linked by peptide bonds. There are more than 300 amino acids found in nature, of which only twenty-two, known as the proteinogenic amino acids, are the building blocks for protein. Only green plants and most microbes are able to synthesize all of the 20 standard amino acids that are needed by all living species. Mammals can synthesize only ten of the twenty standard amino acids. The others (valine, methionine, leucine, isoleucine, phenylalanine, lysine, threonine and tryptophan for adults, plus histidine and arginine for the young) must be obtained through the diet.
Amino acid basic structure
The general structure of the standard amino acids includes a primary amino group, a carboxyl group and the functional group attached to the α-carbon. The different amino acids are identified by the functional group. As a result of the three different groups attached to the α-carbon, amino acids are asymmetrical molecules. For all standard amino acids except glycine, the α-carbon is a chiral center; in glycine, the α-carbon carries two hydrogen atoms, adding symmetry to the molecule. With this exception, the chiral amino acids found in proteins occur in the L-isoform conformation. Proline is unusual in that the functional group on its α-carbon forms a ring with the amino group.
Nitrogen source
One major step in amino acid biosynthesis involves incorporating a nitrogen group onto the α-carbon. In cells, there are two major pathways of incorporating nitrogen groups. One pathway involves the enzyme glutamine oxoglutarate aminotransferase (GOGAT), which removes the amide amino group of glutamine and transfers it onto 2-oxoglutarate, producing two glutamate molecules. In this catalysis reaction, glutamine serves as the nitrogen source.
The other pathway for incorporating nitrogen onto the α-carbon of amino acids involves the enzyme glutamate dehydrogenase (GDH). GDH is able to transfer ammonia onto 2-oxoglutarate and form glutamate. Furthermore, the enzyme glutamine synthetase (GS) is able to transfer ammonia onto glutamate and synthesize glutamine, replenishing glutamine.
The glutamate family of amino acids
The glutamate family of amino acids includes the amino acids that derive from the amino acid glutamate. This family includes: glutamate, glutamine, proline, and arginine. This family also includes the amino acid lysine, which is derived from α-ketoglutarate.
The biosynthesis of glutamate and glutamine is a key step in the nitrogen assimilation discussed above. The enzymes GOGAT and GDH catalyze the nitrogen assimilation reactions.
In bacteria, the enzyme glutamate 5-kinase initiates the biosynthesis of proline by transferring a phosphate group from ATP onto glutamate. The next reaction is catalyzed by the enzyme pyrroline-5-carboxylate synthase (P5CS), which catalyzes the reduction of the γ-carboxyl group of L-glutamate 5-phosphate. This results in the formation of glutamate semialdehyde, which spontaneously cyclizes to pyrroline-5-carboxylate. Pyrroline-5-carboxylate is further reduced by the enzyme pyrroline-5-carboxylate reductase (P5CR) to yield a proline amino acid.
In the first step of arginine biosynthesis in bacteria, glutamate is acetylated by transferring the acetyl group from acetyl-CoA at the N-α position; this prevents spontaneous cyclization. The enzyme N-acetylglutamate synthase (glutamate N-acetyltransferase) is responsible for catalyzing the acetylation step. Subsequent steps are catalyzed by the enzymes N-acetylglutamate kinase, N-acetyl-gamma-glutamyl-phosphate reductase, and acetylornithine/succinyldiamino pimelate aminotransferase and yield N-acetyl-L-ornithine. The acetyl group of acetylornithine is removed by the enzyme acetylornithinase (AO) or ornithine acetyltransferase (OAT), and this yields ornithine. Ornithine is then converted to arginine via the intermediates citrulline and argininosuccinate.
There are two distinct lysine biosynthetic pathways: the diaminopimelic acid pathway and the α-aminoadipate pathway. The most common of the two synthetic pathways is the diaminopimelic acid pathway; it consists of several enzymatic reactions that add carbon groups to aspartate to yield lysine:
Aspartate kinase initiates the diaminopimelic acid pathway by phosphorylating aspartate and producing aspartyl phosphate.
Aspartate semialdehyde dehydrogenase catalyzes the NADPH-dependent reduction of aspartyl phosphate to yield aspartate semialdehyde.
4-hydroxy-tetrahydrodipicolinate synthase adds a pyruvate group to the β-aspartyl-4-semialdehyde, and a water molecule is removed. This causes cyclization and gives rise to (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate.
4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of (2S,4S)-4-hydroxy-2,3,4,5-tetrahydrodipicolinate by NADPH to yield Δ1-piperideine-2,6-dicarboxylate (2,3,4,5-tetrahydrodipicolinate) and H2O.
Tetrahydrodipicolinate acyltransferase catalyzes the acetylation reaction that results in ring opening and yields N-acetyl α-amino-ε-ketopimelate.
N-succinyl-α-amino-ε-ketopimelate-glutamate aminotransaminase catalyzes the transamination reaction that removes the keto group of N-acetyl α-amino-ε-ketopimelate and replaces it with an amino group to yield N-succinyl-L-diaminopimelate.
N-acyldiaminopimelate deacylase catalyzes the deacylation of N-succinyl-L-diaminopimelate to yield L,L-diaminopimelate.
DAP epimerase catalyzes the conversion of L,L-diaminopimelate to meso-diaminopimelate.
DAP decarboxylase catalyzes the removal of the carboxyl group, yielding L-lysine.
The serine family of amino acids
The serine family of amino acids includes: serine, cysteine, and glycine. Most microorganisms and plants obtain the sulfur for synthesizing methionine from the amino acid cysteine. Furthermore, the conversion of serine to glycine provides the carbons needed for the biosynthesis of methionine and histidine.
During serine biosynthesis, the enzyme phosphoglycerate dehydrogenase catalyzes the initial reaction that oxidizes 3-phospho-D-glycerate to yield 3-phosphonooxypyruvate. The following reaction is catalyzed by the enzyme phosphoserine aminotransferase, which transfers an amino group from glutamate onto 3-phosphonooxypyruvate to yield L-phosphoserine. The final step is catalyzed by the enzyme phosphoserine phosphatase, which dephosphorylates L-phosphoserine to yield L-serine.
There are two known pathways for the biosynthesis of glycine. Organisms that use ethanol and acetate as the major carbon source utilize the gluconeogenic pathway to synthesize glycine. The other pathway of glycine biosynthesis is known as the glycolytic pathway. This pathway converts serine synthesized from the intermediates of glycolysis to glycine. In the glycolytic pathway, the enzyme serine hydroxymethyltransferase catalyzes the cleavage of serine to yield glycine and transfers the cleaved carbon group of serine onto tetrahydrofolate, forming 5,10-methylene-tetrahydrofolate.
Cysteine biosynthesis is a two-step reaction that involves the incorporation of inorganic sulfur. In microorganisms and plants, the enzyme serine acetyltransferase catalyzes the transfer of acetyl group from acetyl-CoA onto L-serine to yield O-acetyl-L-serine. The following reaction step, catalyzed by the enzyme O-acetyl serine (thiol) lyase, replaces the acetyl group of O-acetyl-L-serine with sulfide to yield cysteine.
The aspartate family of amino acids
The aspartate family of amino acids includes: threonine, lysine, methionine, isoleucine, and aspartate. Lysine and isoleucine are considered part of the aspartate family even though part of their carbon skeleton is derived from pyruvate. In the case of methionine, the methyl carbon is derived from serine, while the sulfur group in most organisms is derived from cysteine.
The biosynthesis of aspartate is a one step reaction that is catalyzed by a single enzyme. The enzyme aspartate aminotransferase catalyzes the transfer of an amino group from aspartate onto α-ketoglutarate to yield glutamate and oxaloacetate. Asparagine is synthesized by an ATP-dependent addition of an amino group onto aspartate; asparagine synthetase catalyzes the addition of nitrogen from glutamine or soluble ammonia to aspartate to yield asparagine.
The diaminopimelic acid biosynthetic pathway of lysine belongs to the aspartate family of amino acids. This pathway involves nine enzyme-catalyzed reactions that convert aspartate to lysine.
Aspartate kinase catalyzes the initial step in the diaminopimelic acid pathway by transferring a phosphoryl from ATP onto the carboxylate group of aspartate, which yields aspartyl-β-phosphate.
Aspartate-semialdehyde dehydrogenase catalyzes the reduction reaction by dephosphorylation of aspartyl-β-phosphate to yield aspartate-β-semialdehyde.
Dihydrodipicolinate synthase catalyzes the condensation reaction of aspartate-β-semialdehyde with pyruvate to yield dihydrodipicolinic acid.
4-hydroxy-tetrahydrodipicolinate reductase catalyzes the reduction of dihydrodipicolinic acid to yield tetrahydrodipicolinic acid.
Tetrahydrodipicolinate N-succinyltransferase catalyzes the transfer of a succinyl group from succinyl-CoA on to tetrahydrodipicolinic acid to yield N-succinyl-L-2,6-diaminoheptanedioate.
N-succinyldiaminopimelate aminotransferase catalyzes the transfer of an amino group from glutamate onto N-succinyl-L-2,6-diaminoheptanedioate to yield N-succinyl-L,L-diaminopimelic acid.
Succinyl-diaminopimelate desuccinylase catalyzes the removal of acyl group from N-succinyl-L,L-diaminopimelic acid to yield L,L-diaminopimelic acid.
Diaminopimelate epimerase catalyzes the inversion of the α-carbon of L,L-diaminopimelic acid to yield meso-diaminopimelic acid.
Diaminopimelate decarboxylase catalyzes the final step in lysine biosynthesis that removes the carbon dioxide group from meso-diaminopimelic acid to yield L-lysine.
Proteins
Protein synthesis occurs via a process called translation. During translation, genetic material called mRNA is read by ribosomes to generate a protein polypeptide chain. This process requires transfer RNA (tRNA) which serves as an adaptor by binding amino acids on one end and interacting with mRNA at the other end; the latter pairing between the tRNA and mRNA ensures that the correct amino acid is added to the chain. Protein synthesis occurs in three phases: initiation, elongation, and termination. Prokaryotic (archaeal and bacterial) translation differs from eukaryotic translation; however, this section will mostly focus on the commonalities between the two processes.
Additional background
Before translation can begin, the process of binding a specific amino acid to its corresponding tRNA must occur. This reaction, called tRNA charging, is catalyzed by aminoacyl tRNA synthetase. A specific tRNA synthetase is responsible for recognizing and charging a particular amino acid. Furthermore, this enzyme has special discriminator regions to ensure the correct binding between tRNA and its cognate amino acid. The first step for joining an amino acid to its corresponding tRNA is the formation of aminoacyl-AMP:
Amino acid + ATP ⇔ aminoacyl-AMP + PPi
This is followed by the transfer of the aminoacyl group from aminoacyl-AMP to a tRNA molecule. The resulting molecule is aminoacyl-tRNA:
Aminoacyl-AMP + tRNA ⇔ aminoacyl-tRNA + AMP
The combination of these two steps, both of which are catalyzed by aminoacyl tRNA synthetase, produces a charged tRNA that is ready to add amino acids to the growing polypeptide chain.
In addition to binding an amino acid, tRNA has a three nucleotide unit called an anticodon that base pairs with specific nucleotide triplets on the mRNA called codons; codons encode a specific amino acid. This interaction is possible thanks to the ribosome, which serves as the site for protein synthesis. The ribosome possesses three tRNA binding sites: the aminoacyl site (A site), the peptidyl site (P site), and the exit site (E site).
There are numerous codons within an mRNA transcript, and it is very common for an amino acid to be specified by more than one codon; this phenomenon is called degeneracy. In all, there are 64 codons; 61 of them code for one of the 20 amino acids, while the remaining three specify chain termination.
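The codon-to-amino-acid mapping can be illustrated with a small lookup table. The sketch below enumerates all 64 possible triplets and translates a toy mRNA using only a hand-picked subset of the standard genetic code; the full table is omitted here for brevity, and unknown codons are marked rather than guessed.

```python
# Illustrative sketch of codon -> amino acid lookup; only a small subset of the
# 64 codons of the standard genetic code is included.
from itertools import product

ALL_CODONS = ["".join(c) for c in product("UCAG", repeat=3)]
print(len(ALL_CODONS))          # 64 possible triplets

PARTIAL_CODE = {
    "AUG": "Met",                 # also the usual start codon
    "UUU": "Phe", "UUC": "Phe",   # degeneracy: two codons, one amino acid
    "AAA": "Lys", "AAG": "Lys",
    "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA 5'->3' in triplets until a stop codon (partial table only)."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = PARTIAL_CODE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUAAGUGGUAA"))   # ['Met', 'Phe', 'Lys', 'Trp']
```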
Translation in steps
As previously mentioned, translation occurs in three phases: initiation, elongation, and termination.
Step 1: Initiation
The completion of the initiation phase is dependent on the following three events:
1. The recruitment of the ribosome to mRNA
2. The binding of a charged initiator tRNA into the P site of the ribosome
3. The proper alignment of the ribosome with mRNA's start codon
Step 2: Elongation
Following initiation, the polypeptide chain is extended via anticodon:codon interactions, with the ribosome adding amino acids to the polypeptide chain one at a time. The following steps must occur to ensure the correct addition of amino acids:
1. The binding of the correct tRNA into the A site of the ribosome
2. The formation of a peptide bond between the tRNA in the A site and the polypeptide chain attached to the tRNA in the P site
3. Translocation or advancement of the tRNA-mRNA complex by three nucleotides
Translocation "kicks off" the tRNA at the E site and shifts the tRNA from the A site into the P site, leaving the A site free for an incoming tRNA to add another amino acid.
Step 3: Termination
The last stage of translation occurs when a stop codon enters the A site. Then, the following steps occur:
1. The recognition of codons by release factors, which causes the hydrolysis of the polypeptide chain from the tRNA located in the P site
2. The release of the polypeptide chain
3. The dissociation and "recycling" of the ribosome for future translation processes
Diseases associated with macromolecule deficiency
Errors in biosynthetic pathways can have deleterious consequences including the malformation of macromolecules or the underproduction of functional molecules. Below are examples that illustrate the disruptions that occur due to these inefficiencies.
Familial hypercholesterolemia: this disorder is characterized by the absence of functional receptors for LDL. Deficiencies in the formation of LDL receptors may cause faulty receptors which disrupt the endocytic pathway, inhibiting the entry of LDL into the liver and other cells. This causes a buildup of LDL in the blood plasma, which results in atherosclerotic plaques that narrow arteries and increase the risk of heart attacks.
Lesch–Nyhan syndrome: this genetic disease is characterized by self-mutilation, mental deficiency, and gout. It is caused by the absence of hypoxanthine-guanine phosphoribosyltransferase, which is a necessary enzyme for purine nucleotide formation. The lack of enzyme reduces the level of necessary nucleotides and causes the accumulation of biosynthesis intermediates, which results in the aforementioned unusual behavior.
Severe combined immunodeficiency (SCID): SCID is characterized by a loss of T cells. Shortage of these immune system components increases the susceptibility to infectious agents because the affected individuals cannot develop immunological memory. This immunological disorder results from a deficiency in adenosine deaminase activity, which causes a buildup of dATP. These dATP molecules then inhibit ribonucleotide reductase, which prevents DNA synthesis.
Huntington's disease: this neurological disease is caused by errors that occur during DNA synthesis. These errors or mutations lead to the expression of a mutant huntingtin protein, which contains repetitive glutamine residues encoded by expanding CAG trinucleotide repeats in the gene. Huntington's disease is characterized by neuronal loss and gliosis. Symptoms of the disease include movement disorder, cognitive decline, and behavioral disorder.
See also
Lipids
Phospholipid bilayer
Nucleotides
DNA
DNA replication
Proteinogenic amino acid
Codon table
Prostaglandin
Porphyrins
Chlorophylls and bacteriochlorophylls
Vitamin B12
References
Biochemical reactions
Metabolism
Quantitative structure–activity relationship
Quantitative structure–activity relationship models (QSAR models) are regression or classification models used in the chemical and biological sciences and engineering. Like other regression models, QSAR regression models relate a set of "predictor" variables (X) to the potency of the response variable (Y), while classification QSAR models relate the predictor variables to a categorical value of the response variable.
In QSAR modeling, the predictors consist of physico-chemical properties or theoretical molecular descriptors of chemicals; the QSAR response-variable could be a biological activity of the chemicals. QSAR models first summarize a supposed relationship between chemical structures and biological activity in a data-set of chemicals. Second, QSAR models predict the activities of new chemicals.
Related terms include quantitative structure–property relationships (QSPR) when a chemical property is modeled as the response variable.
"Different properties or behaviors of chemical molecules have been investigated in the field of QSPR. Some examples are quantitative structure–reactivity relationships (QSRRs), quantitative structure–chromatography relationships (QSCRs) and, quantitative structure–toxicity relationships (QSTRs), quantitative structure–electrochemistry relationships (QSERs), and quantitative structure–biodegradability relationships (QSBRs)."
As an example, biological activity can be expressed quantitatively as the concentration of a substance required to give a certain biological response. Additionally, when physicochemical properties or structures are expressed by numbers, one can find a mathematical relationship, or quantitative structure-activity relationship, between the two. The mathematical expression, if carefully validated, can then be used to predict the modeled response of other chemical structures.
A QSAR has the form of a mathematical model:
Activity = f(physicochemical properties and/or structural properties) + error
The error includes model error (bias) and observational variability, that is, the variability in observations even on a correct model.
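As a sketch of what fitting such a model can look like in practice, the snippet below uses ordinary least squares (via NumPy) to fit a linear QSAR on a tiny data set. The descriptor values and activities are invented for illustration only; a real study would use validated descriptors and many more compounds.

```python
import numpy as np

# Hypothetical training data: each row is one compound,
# columns are two molecular descriptors (e.g., logP and molar refractivity).
X = np.array([[1.2, 20.0],
              [2.3, 25.0],
              [0.8, 18.0],
              [3.1, 30.0],
              [1.9, 22.0]])
# Hypothetical activities, e.g. pIC50 values.
y = np.array([5.1, 6.0, 4.8, 6.7, 5.6])

# Add a column of ones for the intercept, then solve the least-squares problem
# Activity ≈ b0 + b1*descriptor1 + b2*descriptor2
A = np.column_stack([np.ones(len(X)), X])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coeffs)

# Predict the activity of a new (hypothetical) compound from its descriptors.
new_compound = np.array([1.0, 2.0, 24.0])  # leading 1.0 is the intercept term
print("predicted activity:", new_compound @ coeffs)
```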
Essential steps in QSAR studies
The principal steps of QSAR/QSPR include:
Selection of data set and extraction of structural/empirical descriptors
Variable selection
Model construction
Validation and evaluation of the model
SAR and the SAR paradox
The basic assumption for all molecule-based hypotheses is that similar molecules have similar activities. This principle is also called the Structure–Activity Relationship (SAR). The underlying problem is therefore how to define a small difference on a molecular level, since each kind of activity, e.g. reaction ability, biotransformation ability, solubility, target activity, and so on, might depend on a different kind of structural difference. Examples were given in the bioisosterism reviews by Patani/LaVoie and Brown.
In general, one is more interested in finding strong trends. Created hypotheses usually rely on a finite number of chemicals, so care must be taken to avoid overfitting: the generation of hypotheses that fit training data very closely but perform poorly when applied to new data.
The SAR paradox refers to the fact that not all similar molecules have similar activities.
Types
Fragment based (group contribution)
The "partition coefficient"—a measurement of differential solubility and itself a component of QSAR predictions—can be predicted either by atomic methods (known as "XLogP" or "ALogP") or by chemical fragment methods (known as "CLogP" and other variations). It has been shown that the logP of a compound can be determined by the sum of its fragments; fragment-based methods are generally accepted as better predictors than atomic-based methods. Fragmentary values have been determined statistically, based on empirical data for known logP values. This method gives mixed results and is generally not trusted to have accuracy of more than ±0.1 units.
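The additive, fragment-based idea can be sketched as a simple sum of group contributions. The fragment values and the fragment counts below are hypothetical placeholders, not published CLogP parameters, and real schemes add correction factors on top of the raw sum; the point here is only the bookkeeping.

```python
# Hypothetical fragment contributions to logP (illustrative values only).
FRAGMENT_LOGP = {
    "CH3": 0.55,       # methyl group
    "CH2": 0.50,       # methylene group
    "OH": -1.10,       # hydroxyl group
    "phenyl": 1.90,    # aromatic ring
}

def estimate_logp(fragment_counts: dict) -> float:
    """Estimate logP as the sum of fragment contributions times their counts."""
    return sum(FRAGMENT_LOGP[frag] * n for frag, n in fragment_counts.items())

# A hypothetical fragmentation of 2-phenylethanol: phenyl + 2 CH2 + OH.
print(round(estimate_logp({"phenyl": 1, "CH2": 2, "OH": 1}), 2))
```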
Group or fragment-based QSAR is also known as GQSAR. GQSAR allows flexibility to study various molecular fragments of interest in relation to the variation in biological response. The molecular fragments could be substituents at various substitution sites in congeneric set of molecules or could be on the basis of pre-defined chemical rules in case of non-congeneric sets. GQSAR also considers cross-terms fragment descriptors, which could be helpful in identification of key fragment interactions in determining variation of activity.
Lead discovery using fragnomics is an emerging paradigm. In this context FB-QSAR proves to be a promising strategy for fragment library design and in fragment-to-lead identification endeavours.
An advanced fragment- or group-based QSAR approach based on the concept of pharmacophore similarity has been developed. This method, pharmacophore-similarity-based QSAR (PS-QSAR), uses topological pharmacophoric descriptors to develop QSAR models. The resulting activity prediction may help assess the contribution of certain pharmacophore features, encoded by the respective fragments, to activity improvement and/or detrimental effects.
3D-QSAR
The acronym 3D-QSAR or 3-D QSAR refers to the application of force field calculations requiring three-dimensional structures of a given set of small molecules with known activities (training set). The training set needs to be superimposed (aligned) by either experimental data (e.g. based on ligand-protein crystallography) or molecule superimposition software. It uses computed potentials, e.g. the Lennard-Jones potential, rather than experimental constants and is concerned with the overall molecule rather than a single substituent. The first 3-D QSAR was named Comparative Molecular Field Analysis (CoMFA) by Cramer et al. It examined the steric fields (shape of the molecule) and the electrostatic fields which were correlated by means of partial least squares regression (PLS).
The created data space is then usually reduced by a following feature extraction (see also dimensionality reduction). The following learning method can be any of the already mentioned machine learning methods, e.g. support vector machines. An alternative approach uses multiple-instance learning by encoding molecules as sets of data instances, each of which represents a possible molecular conformation. A label or response is assigned to each set corresponding to the activity of the molecule, which is assumed to be determined by at least one instance in the set (i.e. some conformation of the molecule).
On June 18, 2011, the Comparative Molecular Field Analysis (CoMFA) patent dropped any restriction on the use of GRID and partial least-squares (PLS) technologies.
Chemical descriptor based
In this approach, descriptors quantifying various electronic, geometric, or steric properties of a molecule are computed and used to develop a QSAR. This approach is different from the fragment (or group contribution) approach in that the descriptors are computed for the system as a whole rather than from the properties of individual fragments. This approach is different from the 3D-QSAR approach in that the descriptors are computed from scalar quantities (e.g., energies, geometric parameters) rather than from 3D fields.
An example of this approach is the QSARs developed for olefin polymerization by half sandwich compounds.
String based
It has been shown that activity prediction is even possible based purely on the SMILES string.
Graph based
As with string-based methods, the molecular graph can be used directly as input for QSAR models, but this usually yields inferior performance compared to descriptor-based QSAR models.
Modeling
In the literature, chemists are often found to have a preference for partial least squares (PLS) methods, since PLS applies feature extraction and induction in one step.
Data mining approach
Computational SAR models typically calculate a relatively large number of features. Because these lack structural interpretability, the preprocessing steps face a feature selection problem (i.e., which structural features should be interpreted to determine the structure–activity relationship). Feature selection can be accomplished by visual inspection (qualitative selection by a human), by data mining, or by molecule mining.
A typical data-mining-based prediction uses, for example, support vector machines, decision trees, or artificial neural networks to induce a predictive learning model.
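As a hedged sketch of such a workflow, the snippet below trains a random-forest regressor on a small, entirely synthetic descriptor matrix using scikit-learn (assumed to be installed); in a real study the features would come from computed molecular descriptors and the data set would be much larger.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for a descriptor matrix: 100 "compounds", 8 descriptors.
X = rng.normal(size=(100, 8))
# Synthetic activity that depends on two of the descriptors plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=100)

model = RandomForestRegressor(n_estimators=200, random_state=0)
# 5-fold cross-validated R^2 as a quick internal-validation estimate.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(2))

# Feature importances give a crude view of which descriptors drive the model.
model.fit(X, y)
print("feature importances:", model.feature_importances_.round(2))
```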
Molecule mining approaches, a special case of structured data mining approaches, apply a similarity matrix based prediction or an automatic fragmentation scheme into molecular substructures. Furthermore, there exist also approaches using maximum common subgraph searches or graph kernels.
Matched molecular pair analysis
Typically, QSAR models derived from nonlinear machine learning are seen as a "black box" that fails to guide medicinal chemists. A relatively new approach, matched molecular pair analysis (MMPA), or prediction-driven MMPA, couples such analysis with a QSAR model in order to identify activity cliffs.
Evaluation of the quality of QSAR models
QSAR modeling produces predictive models derived from application of statistical tools correlating biological activity (including desirable therapeutic effect and undesirable side effects) or physico-chemical properties in QSPR models of chemicals (drugs/toxicants/environmental pollutants) with descriptors representative of molecular structure or properties. QSARs are being applied in many disciplines, for example: risk assessment, toxicity prediction, and regulatory decisions in addition to drug discovery and lead optimization. Obtaining a good quality QSAR model depends on many factors, such as the quality of input data, the choice of descriptors and statistical methods for modeling and for validation. Any QSAR modeling should ultimately lead to statistically robust and predictive models capable of making accurate and reliable predictions of the modeled response of new compounds.
For validation of QSAR models, usually various strategies are adopted:
internal validation or cross-validation (here cross-validation measures model robustness: the more robust a model is (higher q2), the less the removal of part of the data perturbs the original model);
external validation by splitting the available data set into training set for model development and prediction set for model predictivity check;
blind external validation by application of model on new external data and
data randomization or Y-scrambling for verifying the absence of chance correlation between the response and the modeling descriptors.
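The Y-scrambling check in particular is easy to sketch: refit the model many times on data where the response values have been randomly permuted, and confirm that the scrambled models score much worse than the real one. The snippet below is a minimal illustration with ordinary linear regression on synthetic data (scikit-learn assumed available); real workflows apply the same idea to the actual QSAR model and descriptors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Synthetic descriptors and a response that truly depends on them.
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.2, size=60)

model = LinearRegression().fit(X, y)
print("R^2 on real response:", round(r2_score(y, model.predict(X)), 2))

# Y-scrambling: permute the response, refit, and record the (much lower) R^2.
scrambled_r2 = []
for _ in range(100):
    y_perm = rng.permutation(y)
    m = LinearRegression().fit(X, y_perm)
    scrambled_r2.append(r2_score(y_perm, m.predict(X)))
print("mean R^2 after Y-scrambling:", round(float(np.mean(scrambled_r2)), 2))
```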
The success of any QSAR model depends on accuracy of the input data, selection of appropriate descriptors and statistical tools, and most importantly validation of the developed model. Validation is the process by which the reliability and relevance of a procedure are established for a specific purpose; for QSAR models validation must be mainly for robustness, prediction performances and applicability domain (AD) of the models.
Some validation methodologies can be problematic. For example, leave-one-out cross-validation generally leads to an overestimation of predictive capacity. Even with external validation, it is difficult to determine whether the selection of training and test sets was manipulated to maximize the predictive capacity of the model being published.
Different aspects of validation of QSAR models that need attention include methods of selection of training set compounds, setting training set size and impact of variable selection for training set models for determining the quality of prediction. Development of novel validation parameters for judging quality of QSAR models is also important.
Application
Chemical
One of the first historical QSAR applications was to predict boiling points.
It is well known, for instance, that within a particular family of chemical compounds, especially in organic chemistry, there are strong correlations between structure and observed properties. A simple example is the relationship between the number of carbons in alkanes and their boiling points. There is a clear trend of increasing boiling point with an increasing number of carbons, and this serves as a means for predicting the boiling points of higher alkanes.
Still very relevant applications are the Hammett equation, the Taft equation, and pKa prediction methods.
Biological
The biological activity of molecules is usually measured in assays to establish the level of inhibition of particular signal transduction or metabolic pathways. Drug discovery often involves the use of QSAR to identify chemical structures that could have good inhibitory effects on specific targets and have low toxicity (non-specific activity). Of special interest is the prediction of partition coefficient log P, which is an important measure used in identifying "druglikeness" according to Lipinski's Rule of Five.
While many quantitative structure–activity relationship analyses involve the interactions of a family of molecules with an enzyme or receptor binding site, QSAR can also be used to study the interactions between the structural domains of proteins. Protein–protein interactions can be quantitatively analyzed for structural variations resulting from site-directed mutagenesis.
Reducing the risk of a SAR paradox is part of the machine learning task, especially since only a finite amount of data is available (see also MVUE). In general, all QSAR problems can be divided into two steps: coding and learning.
Applications
(Q)SAR models have been used for risk management. QSARs are suggested by regulatory authorities; in the European Union, QSARs are suggested by the REACH regulation, where "REACH" abbreviates "Registration, Evaluation, Authorisation and Restriction of Chemicals". Regulatory application of QSAR methods includes the in silico toxicological assessment of genotoxic impurities. Commonly used QSAR assessment software such as DEREK or CASE Ultra (MultiCASE) is used to assess the genotoxicity of impurities according to ICH M7.
The chemical descriptor space whose convex hull is generated by a particular training set of chemicals is called the training set's applicability domain. Prediction of properties of novel chemicals that are located outside the applicability domain uses extrapolation, and so is less reliable (on average) than prediction within the applicability domain. The assessment of the reliability of QSAR predictions remains a research topic.
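A very crude stand-in for an applicability-domain check is to ask whether each descriptor of a query compound falls within the range spanned by the training set. Note that this bounding box encloses the convex hull, so a compound flagged as outside the box is certainly outside the hull, while a compound inside the box may still lie outside the hull. The descriptor values below are hypothetical.

```python
import numpy as np

# Hypothetical training-set descriptor matrix (rows: compounds, cols: descriptors).
X_train = np.array([[1.2, 20.0, 0.3],
                    [2.3, 25.0, 0.1],
                    [0.8, 18.0, 0.4],
                    [3.1, 30.0, 0.2]])

lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def inside_descriptor_box(x: np.ndarray) -> bool:
    """Bounding-box proxy for the applicability domain: True if every
    descriptor of the query lies within the training-set range."""
    return bool(np.all((x >= lo) & (x <= hi)))

print(inside_descriptor_box(np.array([2.0, 22.0, 0.25])))  # True: interpolation
print(inside_descriptor_box(np.array([5.0, 22.0, 0.25])))  # False: extrapolation
```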
The QSAR equations can be used to predict biological activities of newer molecules before their synthesis.
See also
References
Further reading
External links
Chemoinformatics Tools, Drug Theoretics and Cheminformatics Laboratory
Multiscale Conceptual Model Figures for QSARs in Biological and Environmental Science
Medicinal chemistry
Drug discovery
Cheminformatics
Computational chemistry
Structure-Activity Relationship paradox
Biophysics
Biophysics is an interdisciplinary science that applies approaches and methods traditionally used in physics to study biological phenomena. Biophysics covers all scales of biological organization, from molecular to organismic and populations. Biophysical research shares significant overlap with biochemistry, molecular biology, physical chemistry, physiology, nanotechnology, bioengineering, computational biology, biomechanics, developmental biology and systems biology.
The term biophysics was originally introduced by Karl Pearson in 1892. The term biophysics is also regularly used in academia to indicate the study of the physical quantities (e.g. electric current, temperature, stress, entropy) in biological systems. Other biological sciences also perform research on the biophysical properties of living organisms including molecular biology, cell biology, chemical biology, and biochemistry.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, x-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
In addition to traditional (i.e. molecular and cellular) biophysical topics like structural biology or enzyme kinetics, modern biophysics encompasses an extraordinarily broad range of research, from bioelectronics to quantum biology involving both experimental and theoretical tools. It is becoming increasingly common for biophysicists to apply the models and experimental techniques derived from physics, as well as mathematics and statistics, to larger systems such as tissues, organs, populations and ecosystems. Biophysical models are used extensively in the study of electrical conduction in single neurons, as well as neural circuit analysis in both tissue and whole brain.
Medical physics, a branch of biophysics, is any application of physics to medicine or healthcare, ranging from radiology to microscopy and nanomedicine. For example, physicist Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines (see nanomachines). Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay There's Plenty of Room at the Bottom.
History
The studies of Luigi Galvani (1737–1798) laid groundwork for the later field of biophysics. Some of the earlier studies in biophysics were conducted in the 1840s by a group known as the Berlin school of physiologists. Among its members were pioneers such as Hermann von Helmholtz, Ernst Heinrich Weber, Carl F. W. Ludwig, and Johannes Peter Müller.
William T. Bovie (1882–1958) is credited as a leader of the field's further development in the mid-20th century. He was a leader in developing electrosurgery.
The popularity of the field rose when the book What Is Life? by Erwin Schrödinger was published. Since 1957, biophysicists have organized themselves into the Biophysical Society, which now has about 9,000 members around the world.
Some authors such as Robert Rosen criticize biophysics on the ground that the biophysical method does not take into account the specificity of biological phenomena.
Focus as a subfield
While some colleges and universities have dedicated departments of biophysics, usually at the graduate level, many do not have university-level biophysics departments, instead having groups in related departments such as biochemistry, cell biology, chemistry, computer science, engineering, mathematics, medicine, molecular biology, neuroscience, pharmacology, physics, and physiology. Depending on the strengths of a department at a university differing emphasis will be given to fields of biophysics. What follows is a list of examples of how each department applies its efforts toward the study of biophysics. This list is hardly all inclusive. Nor does each subject of study belong exclusively to any particular department. Each academic institution makes its own rules and there is much overlap between departments.
Biology and molecular biology – Gene regulation, single protein dynamics, bioenergetics, patch clamping, biomechanics, virophysics.
Structural biology – Ångstrom-resolution structures of proteins, nucleic acids, lipids, carbohydrates, and complexes thereof.
Biochemistry and chemistry – biomolecular structure, siRNA, nucleic acid structure, structure-activity relationships.
Computer science – Neural networks, biomolecular and drug databases.
Computational chemistry – molecular dynamics simulation, molecular docking, quantum chemistry
Bioinformatics – sequence alignment, structural alignment, protein structure prediction
Mathematics – graph/network theory, population modeling, dynamical systems, phylogenetics.
Medicine – biophysical research that emphasizes medicine. Medical biophysics is a field closely related to physiology. It explains various aspects and systems of the body from a physical and mathematical perspective. Examples are fluid dynamics of blood flow, gas physics of respiration, radiation in diagnostics/treatment and much more. Biophysics is taught as a preclinical subject in many medical schools, mainly in Europe.
Neuroscience – studying neural networks experimentally (brain slicing) as well as theoretically (computer models), membrane permittivity.
Pharmacology and physiology – channelomics, electrophysiology, biomolecular interactions, cellular membranes, polyketides.
Physics – negentropy, stochastic processes, and the development of new physical techniques and instrumentation as well as their application.
Quantum biology – The field of quantum biology applies quantum mechanics to biological objects and problems. One example is the study of decohered isomers that yield time-dependent base substitutions. These studies imply applications in quantum computing.
Agronomy and agriculture
Many biophysical techniques are unique to this field. Research efforts in biophysics are often initiated by scientists who were biologists, chemists or physicists by training.
See also
Biophysical Society
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Biophysical chemistry
European Biophysical Societies' Association
Mathematical and theoretical biology
Medical biophysics
Membrane biophysics
Molecular biophysics
Neurophysics
Physiomics
Virophysics
Single-particle trajectory
References
Sources
External links
Biophysical Society
Journal of Physiology: 2012 virtual issue Biophysics and Beyond
bio-physics-wiki
Link archive of learning resources for students: biophysika.de (60% English, 40% German)
Applied and interdisciplinary physics
Organic chemistry
Organic chemistry is a subdiscipline within chemistry involving the scientific study of the structure, properties, and reactions of organic compounds and organic materials, i.e., matter in its various forms that contain carbon atoms. Study of structure determines their structural formula. Study of properties includes physical and chemical properties, and evaluation of chemical reactivity to understand their behavior. The study of organic reactions includes the chemical synthesis of natural products, drugs, and polymers, and study of individual organic molecules in the laboratory and via theoretical (in silico) study.
The range of chemicals studied in organic chemistry includes hydrocarbons (compounds containing only carbon and hydrogen) as well as compounds based on carbon, but also containing other elements, especially oxygen, nitrogen, sulfur, phosphorus (included in many biochemicals) and the halogens. Organometallic chemistry is the study of compounds containing carbon–metal bonds.
In addition, contemporary research focuses on organic chemistry involving other organometallics including the lanthanides, but especially the transition metals zinc, copper, palladium, nickel, cobalt, titanium and chromium.
Organic compounds form the basis of all earthly life and constitute the majority of known chemicals. The bonding patterns of carbon, with its valence of four—formal single, double, and triple bonds, plus structures with delocalized electrons—make the array of organic compounds structurally diverse, and their range of applications enormous. They form the basis of, or are constituents of, many commercial products including pharmaceuticals; petrochemicals and agrichemicals, and products made from them including lubricants, solvents; plastics; fuels and explosives. The study of organic chemistry overlaps organometallic chemistry and biochemistry, but also with medicinal chemistry, polymer chemistry, and materials science.
Educational aspects
Organic chemistry is typically taught at the college or university level. It is considered a very challenging course, but has also been made accessible to students.
History
Before the 18th century, chemists generally believed that compounds obtained from living organisms were endowed with a vital force that distinguished them from inorganic compounds. According to the concept of vitalism (vital force theory), organic matter was endowed with a "vital force". During the first half of the nineteenth century, some of the first systematic studies of organic compounds were reported. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from inorganic starting materials (the salts potassium cyanate and ammonium sulfate), in what is now called the Wöhler synthesis. Although Wöhler himself was cautious about claiming he had disproved vitalism, this was the first time a substance thought to be organic was synthesized in the laboratory without biological (organic) starting materials. The event is now generally accepted as indeed disproving the doctrine of vitalism.
After Wöhler, Justus von Liebig worked on the organization of organic chemistry, being considered one of its principal founders.
In 1856, William Henry Perkin, while trying to manufacture quinine, accidentally produced the organic dye now known as Perkin's mauve. His discovery, made widely known through its financial success, greatly increased interest in organic chemistry.
A crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently in 1858 by both Friedrich August Kekulé and Archibald Scott Couper. Both researchers suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.
The era of the pharmaceutical industry began in the last decade of the 19th century when the German company, Bayer, first manufactured acetylsalicylic acid—more commonly known as aspirin. By 1910 Paul Ehrlich and his laboratory group began developing arsenic-based arsphenamine, (Salvarsan), as the first effective medicinal treatment of syphilis, and thereby initiated the medical practice of chemotherapy. Ehrlich popularized the concepts of "magic bullet" drugs and of systematically improving drug therapies. His laboratory made decisive contributions to developing antiserum for diphtheria and standardizing therapeutic serums.
Early examples of organic reactions and applications were often found because of a combination of luck and preparation for unexpected observations. The latter half of the 19th century however witnessed systematic studies of organic compounds. The development of synthetic indigo is illustrative. The production of indigo from plant sources dropped from 19,000 tons in 1897 to 1,000 tons by 1914 thanks to the synthetic methods developed by Adolf von Baeyer. In 2002, 17,000 tons of synthetic indigo were produced from petrochemicals.
In the early part of the 20th century, polymers and enzymes were shown to be large organic molecules, and petroleum was shown to be of biological origin.
The multiple-step synthesis of complex organic compounds is called total synthesis. Total synthesis of complex natural compounds increased in complexity to glucose and terpineol. For example, cholesterol-related compounds have opened ways to synthesize complex human hormones and their modified derivatives. Since the start of the 20th century, complexity of total syntheses has been increased to include molecules of high complexity such as lysergic acid and vitamin B12.
The discovery of petroleum and the development of the petrochemical industry spurred the development of organic chemistry. Converting individual petroleum compounds into types of compounds by various chemical processes led to organic reactions enabling a broad range of industrial and commercial products including, among (many) others: plastics, synthetic rubber, organic adhesives, and various property-modifying petroleum additives and catalysts.
The majority of chemical compounds occurring in biological organisms are carbon compounds, so the association between organic chemistry and biochemistry is so close that biochemistry might be regarded as in essence a branch of organic chemistry. Although the history of biochemistry might be taken to span some four centuries, fundamental understanding of the field only began to develop in the late 19th century and the actual term biochemistry was coined around the start of 20th century. Research in the field increased throughout the twentieth century, without any indication of slackening in the rate of increase, as may be verified by inspection of abstraction and indexing services such as BIOSIS Previews and Biological Abstracts, which began in the 1920s as a single annual volume, but has grown so drastically that by the end of the 20th century it was only available to the everyday user as an online electronic database.
Characterization
Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity; chromatography techniques are especially important for this application, and include HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, evaporation, magnetic separation and solvent extraction.
Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods", but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis. Listed in approximate order of utility, the chief analytical methods are:
Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting the complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry – hydrogen and carbon – exist naturally with NMR-responsive isotopes, respectively 1H and 13C.
Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High-resolution mass spectrometry can usually identify the exact formula of a compound and is used in place of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
Crystallography can be useful for determining molecular geometry when a single crystal of the material is available. Highly efficient hardware and software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, and UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific applications. Refractive index and density can also be important for substance identification.
Properties
The physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes a melting point, boiling point, solubility, and index of refraction. Qualitative properties include odor, consistency, and color.
Melting and boiling properties
Organic compounds typically melt and many boil. In contrast, while inorganic materials generally can be melted, many do not boil, and instead tend to degrade. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds. The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime. A well-known example of a sublimable organic compound is para-dichlorobenzene, the odiferous constituent of modern mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.
Solubility
Neutral organic compounds tend to be hydrophobic; that is, they are less soluble in water than inorganic solvents. Exceptions include organic compounds that contain ionizable groups as well as low molecular weight alcohols, amines, and carboxylic acids where hydrogen bonding occurs. Otherwise, organic compounds tend to dissolve in organic solvents. Solubility varies widely with the organic solute and with the organic solvent.
Solid state properties
Various specialized properties of molecular crystals and organic polymers with conjugated systems are of interest depending on applications, e.g. thermo-mechanical and electro-mechanical such as piezoelectricity, electrical conductivity (see conductive polymers and organic semiconductors), and electro-optical (e.g. non-linear optics) properties. For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.
Nomenclature
The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by specifications from IUPAC (International Union of Pure and Applied Chemistry). Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and mono functionalized derivatives thereof.
Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists. Nonsystematic names do not indicate the structure of the compound. They are common for complex molecules, which include most natural products. Thus, the informally named lysergic acid diethylamide is systematically named
(6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo[4,3-fg]quinoline-9-carboxamide.
With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.
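As an illustration of such machine-readable names, the snippet below parses a SMILES string and emits a canonical SMILES and an InChI using RDKit; this assumes an RDKit installation built with InChI support, and the molecule (ethanol) is just a convenient example.

```python
from rdkit import Chem

mol = Chem.MolFromSmiles("CCO")   # ethanol written as SMILES
print(Chem.MolToSmiles(mol))      # canonical SMILES: CCO
print(Chem.MolToInchi(mol))       # InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3
```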
Structural drawings
Organic molecules are described more commonly by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of each line represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon.
History
By 1880 an explosion in the number of chemical compounds being discovered occurred assisted by new synthetic and analytical techniques. Grignard described the situation as "chaos le plus complet" (complete chaos) due to the lack of convention it was possible to have multiple names for the same compound. This led to the creation of the Geneva rules in 1892.
Classification of organic compounds
Functional groups
The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have a decisive influence on the chemical and physical properties of organic compounds. Molecules are classified based on their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to functional groups, alcohols, carboxylic acids, amines, etc. Functional groups make the molecule more acidic or basic due to their electronic influence on surrounding parts of the molecule.
As the pKa (i.e., basicity) of a functional group increases, the corresponding dipole, when measured, increases in strength. A dipole directed towards the functional group (higher pKa, and therefore a more basic group) points towards it and decreases in strength with increasing distance. Dipole distance (measured in angstroms) and steric hindrance towards the functional group have an intermolecular and intramolecular effect on the surrounding environment and pH level.
Different functional groups have different pKa values and bond strengths (single, double, triple), leading to increased electrophilicity with lower pKa and increased nucleophile strength with higher pKa. More basic/nucleophilic functional groups tend to attack an electrophilic functional group with a lower pKa on another molecule (intermolecular) or within the same molecule (intramolecular). Any group with a net acidic pKa that gets within range, such as an acyl or carbonyl group, is fair game. Since the likelihood of being attacked decreases with an increase in pKa, acyl chloride components with the lowest measured pKa values are most likely to be attacked, followed by carboxylic acids (pKa = 4), thiols (13), malonates (13), alcohols (17), aldehydes (20), nitriles (25), esters (25), and then amines (35). Amines are very basic and are strong nucleophiles/attackers.
Aliphatic compounds
The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
alkanes (paraffins): aliphatic hydrocarbons without any double or triple bonds, i.e. just C-C, C-H single bonds
alkenes (olefins): aliphatic hydrocarbons that contain one or more double bonds, i.e. di-olefins (dienes) or poly-olefins.
alkynes (acetylenes): aliphatic hydrocarbons which have one or more triple bonds.
The rest of the group is classified according to the functional groups present. Such compounds can be "straight-chain", branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.
Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have an alternating (or conjugated) double bond. Cycloalkanes do not contain multiple bonds, whereas the cycloalkenes and the cycloalkynes do.
Aromatic compounds
Aromatic hydrocarbons contain conjugated double bonds. This means that every carbon atom in the ring is sp2 hybridized, allowing for added stability. The most important example is benzene, the structure of which was formulated by Kekulé who first proposed the delocalization or resonance principle for explaining its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
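The 4n + 2 counting rule is simple enough to express directly; the function below only checks the electron count and says nothing about the other requirements for aromaticity (a planar, fully conjugated ring).

```python
def satisfies_huckel_rule(pi_electrons: int) -> bool:
    """True if the pi-electron count equals 4n + 2 for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print(satisfies_huckel_rule(6))   # benzene: True (n = 1)
print(satisfies_huckel_rule(4))   # cyclobutadiene: False (4n, antiaromatic count)
print(satisfies_huckel_rule(10))  # naphthalene's 10 pi electrons: True (n = 2)
```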
Heterocyclic compounds
The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.
Heterocycles are commonly found in a wide range of products including aniline dyes and medicines. Additionally, they are prevalent in a wide range of biochemical compounds such as alkaloids, vitamins, steroids, and nucleic acids (e.g. DNA, RNA).
Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro and are important in several natural products.
Polymers
One important property of carbon is that it readily forms chains, or networks, that are linked by carbon-carbon (carbon-to-carbon) bonds. The linking process is called polymerization, while the chains, or networks, are called polymers. The source compound is called a monomer.
Two main groups of polymers exist: synthetic polymers and biopolymers. Synthetic polymers are artificially manufactured and are commonly referred to as industrial polymers. Biopolymers occur in natural environments, without human intervention.
Biomolecules
Biomolecular chemistry is a major category within organic chemistry which is frequently studied by biochemists. Many complex multi-functional group molecules are important in living organisms. Some are long-chain biopolymers, and these include peptides, DNA, RNA and the polysaccharides such as starches in animals and celluloses in plants. The other main classes are amino acids (monomer building blocks of peptides and proteins), carbohydrates (which includes the polysaccharides), the nucleic acids (which include DNA and RNA as polymers), and the lipids. Besides, animal biochemistry contains many small molecule intermediates which assist in energy production through the Krebs cycle, and produces isoprene, the most common hydrocarbon in animals. Isoprenes in animals form the important steroid structural (cholesterol) and steroid hormone compounds; and in plants form terpenes, terpenoids, some alkaloids, and a class of hydrocarbons called biopolymer polyisoprenoids present in the latex of various species of plants, which is the basis for making rubber. Biologists usually classify the above-mentioned biomolecules into four main groups, i.e., proteins, lipids, carbohydrates, and nucleic acids. Petroleum and its derivatives are considered organic molecules, which is consistent with the fact that this oil comes from the fossilization of living beings, i.e., biomolecules.
See also: peptide synthesis, oligonucleotide synthesis and carbohydrate synthesis.
Small molecules
In pharmacology, an important group of organic compounds is small molecules, also referred to as 'small organic compounds'. In this context, a small molecule is a small organic compound that is biologically active but is not a polymer. In practice, small molecules have a molar mass less than approximately 1000 g/mol.
Fullerenes
Fullerenes and carbon nanotubes, carbon compounds with spheroidal and tubular structures, have stimulated much research into the related field of materials science. The first fullerene was discovered in 1985 by Sir Harold W. Kroto of the United Kingdom and by Richard E. Smalley and Robert F. Curl Jr., of the United States. Using a laser to vaporize graphite rods in an atmosphere of helium gas, these chemists and their assistants obtained cagelike molecules composed of 60 carbon atoms (C60) joined by single and double bonds to form a hollow sphere with 12 pentagonal and 20 hexagonal faces—a design that resembles a football, or soccer ball. In 1996 the trio was awarded the Nobel Prize for their pioneering efforts. The C60 molecule was named buckminsterfullerene (or, more simply, the buckyball) after the American architect R. Buckminster Fuller, whose geodesic dome is constructed on the same structural principles.
Others
Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry.
Organic reactions
Organic reactions are chemical reactions involving organic compounds. Many of these reactions are associated with functional groups. The general theory of these reactions involves careful analysis of such properties as the electron affinity of key atoms, bond strengths and steric hindrance. These factors can determine the relative stability of short-lived reactive intermediates, which usually directly determine the path of the reaction.
The basic reaction types are: addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions and redox reactions. An example of a common reaction is a substitution reaction written as:
Nu− + C–X → C–Nu + X−
where X is some functional group and Nu is a nucleophile.
The number of possible organic reactions is infinite. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens in sequence—although the detailed description of steps is not always clear from a list of reactants alone.
The stepwise course of any given reaction mechanism can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition through intermediates to final products.
Organic synthesis
Synthetic organic chemistry is an applied science as it borders engineering, the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem-solving task, where a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can have tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. For example, a carbonyl compound can be used as a nucleophile by converting it into an enolate, or as an electrophile; the combination of the two is called the aldol reaction. Designing practically useful syntheses always requires conducting the actual synthesis in the laboratory. The scientific practice of creating novel synthetic routes for complex molecules is called total synthesis.
Strategies to design a synthesis include retrosynthesis, popularized by E.J. Corey, which starts with the target molecule and splices it to pieces according to known reactions. The pieces, or the proposed precursors, receive the same treatment, until available and ideally inexpensive starting materials are reached. Then, the retrosynthesis is written in the opposite direction to give the synthesis. A "synthetic tree" can be constructed because each compound and also each precursor has multiple syntheses.
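The recursive flavour of retrosynthetic analysis can be sketched with a toy search over a hand-made table of disconnections. The compound names, the "known disconnections", and the set of purchasable starting materials below are all invented placeholders, so the output is only meant to show the shape of a synthetic tree, not real chemistry.

```python
# Toy retrosynthesis: recursively split a target using a hand-made table of
# disconnections until everything is a (hypothetical) purchasable material.
DISCONNECTIONS = {
    "target_drug": [["fragment_A", "fragment_B"]],
    "fragment_A": [["building_block_1", "building_block_2"]],
    "fragment_B": [["building_block_3"]],
}
PURCHASABLE = {"building_block_1", "building_block_2", "building_block_3"}

def retro(compound: str, depth: int = 0) -> None:
    indent = "  " * depth
    if compound in PURCHASABLE:
        print(f"{indent}{compound}  (starting material)")
        return
    print(f"{indent}{compound}")
    for precursors in DISCONNECTIONS.get(compound, []):
        for p in precursors:
            retro(p, depth + 1)

retro("target_drug")
```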
See also
Important publications in organic chemistry
List of organic reactions
Molecular modelling
References
External links
MIT.edu, OpenCourseWare: Organic Chemistry I
HaverFord.edu, Organic Chemistry Lectures, Videos and Text
Organic-Chemistry.org, Organic Chemistry Portal – Recent Abstracts and (Name)Reactions
Orgsyn.org, Organic Chemistry synthesis journal
Pearson Channels, Organic Chemistry Video Lectures and Practice Problems
Khanacademy.org, Khan Academy - Organic Chemistry
Chemistry
Formal science
Formal science is a branch of science studying disciplines concerned with abstract structures described by formal systems, such as logic, mathematics, statistics, theoretical computer science, artificial intelligence, information theory, game theory, systems theory, decision theory and theoretical linguistics. Whereas the natural sciences and social sciences seek to characterize physical systems and social systems, respectively, using empirical methods, the formal sciences use language tools concerned with characterizing abstract structures described by formal systems. The formal sciences aid the natural and social sciences by providing information about the structures used to describe the physical world, and what inferences may be made about them.
Branches
Logic (also a branch of philosophy)
Mathematics
Statistics
Systems science
Data science
Information science
Computer science
Cryptography
Differences from other sciences
Because of their non-empirical nature, formal sciences are constructed by outlining a set of axioms and definitions from which other statements (theorems) are deduced. For this reason, in Rudolf Carnap's logical-positivist conception of the epistemology of science, theories belonging to formal sciences are understood to contain no synthetic statements, instead containing only analytic statements.
See also
Philosophy
Science
Rationalism
Abstract structure
Abstraction in mathematics
Abstraction in computer science
Cognitive science
Formalism (philosophy of mathematics)
Formal grammar
Formal language
Formal method
Formal system
Form and content
Mathematical model
Mathematical sciences
Mathematics Subject Classification
Semiotics
Theory of forms
References
Further reading
Mario Bunge (1985). Philosophy of Science and Technology. Springer.
Mario Bunge (1998). Philosophy of Science. Rev. ed. of: Scientific research. Berlin, New York: Springer-Verlag, 1967.
C. West Churchman (1940). Elements of Logic and Formal Science, J.B. Lippincott Co., New York.
James Franklin (1994). The formal sciences discover the philosophers' stone. In: Studies in History and Philosophy of Science. Vol. 25, No. 4, pp. 513–533, 1994
Stephen Leacock (1906). Elements of Political Science. Houghton, Mifflin Co, 417 pp.
Bernt P. Stigum (1990). Toward a Formal Science of Economics. MIT Press
Marcus Tomalin (2006), Linguistics and the Formal Sciences. Cambridge University Press
William L. Twining (1997). Law in Context: Enlarging a Discipline. 365 pp.
External links
Interdisciplinary conferences — Foundations of the Formal Sciences
Branches of science
Chemosynthesis
In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria.
Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen.
Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water.
It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later.
Hydrogen sulfide chemosynthesis process
Giant tube worms use bacteria in their trophosome to fix carbon dioxide (using hydrogen sulfide as their energy source) and produce sugars and amino acids.
Some reactions produce sulfur:
hydrogen sulfide chemosynthesis:
18H2S + 6CO2 + 3O2 → C6H12O6 (carbohydrate) + 12H2O + 18S
Instead of releasing oxygen gas while fixing carbon dioxide, as in photosynthesis, hydrogen sulfide chemosynthesis produces solid globules of sulfur in the process. In bacteria capable of chemoautotrophy (a form of chemosynthesis), such as purple sulfur bacteria, yellow globules of sulfur are present and visible in the cytoplasm.
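The stoichiometry of the equation above can be double-checked by counting atoms on each side; the short script below does that bookkeeping with element counts for each species entered by hand rather than parsed from formulas.

```python
from collections import Counter

# Element counts per molecule, entered by hand for this one equation.
H2S = Counter(H=2, S=1)
CO2 = Counter(C=1, O=2)
O2  = Counter(O=2)
GLC = Counter(C=6, H=12, O=6)   # glucose, C6H12O6
H2O = Counter(H=2, O=1)
S   = Counter(S=1)

def scale(counter, n):
    """Multiply every element count in a Counter by the stoichiometric coefficient n."""
    return Counter({el: n * k for el, k in counter.items()})

left = scale(H2S, 18) + scale(CO2, 6) + scale(O2, 3)
right = scale(GLC, 1) + scale(H2O, 12) + scale(S, 18)
print(left == right)   # True: the equation is balanced
print(dict(left))      # {'H': 36, 'S': 18, 'C': 6, 'O': 18}
```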
Discovery
In 1890, Sergei Winogradsky proposed a novel type of life process called "anorgoxydant". His discovery suggested that some microbes could live solely on inorganic matter and emerged during his physiological research in the 1880s in Strasbourg and Zürich on sulfur, iron, and nitrogen bacteria.
In 1897, Wilhelm Pfeffer coined the term "chemosynthesis" for the energy production by oxidation of inorganic substances, in association with autotrophic carbon dioxide assimilation—what would be named today as chemolithoautotrophy. Later, the term would be expanded to include also chemoorganoautotrophs, which are organisms that use organic energy substrates in order to assimilate carbon dioxide. Thus, chemosynthesis can be seen as a synonym of chemoautotrophy.
The term "chemotrophy", less restrictive, would be introduced in the 1940s by André Lwoff for the production of energy by the oxidation of electron donors, organic or not, associated with auto- or heterotrophy.
Hydrothermal vents
The suggestion of Winogradsky was confirmed nearly 90 years later, when hydrothermal ocean vents were predicted to exist in the 1970s. The hot springs and strange creatures were discovered by Alvin, the world's first deep-sea submersible, in 1977 at the Galapagos Rift. At about the same time, then-graduate student Colleen Cavanaugh proposed chemosynthetic bacteria that oxidize sulfides or elemental sulfur as a mechanism by which tube worms could survive near hydrothermal vents. Cavanaugh later managed to confirm that this was indeed the method by which the worms could thrive, and is generally credited with the discovery of chemosynthesis.
A 2004 television series hosted by Bill Nye named chemosynthesis as one of the 100 greatest scientific discoveries of all time.
Oceanic crust
In 2013, researchers reported their discovery of bacteria living in the rock of the oceanic crust below the thick layers of sediment, and apart from the hydrothermal vents that form along the edges of the tectonic plates. Preliminary findings are that these bacteria subsist on the hydrogen produced by chemical reduction of olivine by seawater circulating in the small veins that permeate the basalt that comprises oceanic crust. The bacteria synthesize methane by combining hydrogen and carbon dioxide.
Chemosynthesis as an innovative area for continuing research
Although the process of chemosynthesis has been known for more than a hundred years, it remains important today for understanding the transformation of chemical elements in biogeochemical cycles. The vital processes of nitrifying bacteria, which oxidize ammonia to nitrite and nitrate, still require scientific substantiation and additional research. The ability of bacteria to convert inorganic substances into organic ones suggests that chemosynthetic organisms could accumulate valuable resources for human needs.
Chemosynthetic communities in different environments are important biological systems in terms of their ecology, evolution and biogeography, as well as their potential as indicators of the availability of permanent hydrocarbon-based energy sources. In the process of chemosynthesis, bacteria produce organic matter where photosynthesis is impossible. The isolation of the thermophilic sulfate-reducing bacterium Thermodesulfovibrio yellowstonii and of other chemosynthetic organisms opens prospects for further research. The importance of chemosynthesis thus extends to innovative technologies, the conservation of ecosystems, and human life in general.
See also
Primary nutritional groups
Autotroph
Heterotroph
Photosynthesis
Movile Cave
References
External links
Chemosynthetic Communities in the Gulf of Mexico
Biological processes
Metabolism
Environmental microbiology
Ecosystems | 0.801868 | 0.993883 | 0.796963 |
Molecular modelling | Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but computers are inevitably required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic-level description of the molecular systems. This may include treating atoms as the smallest individual unit (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach).
Molecular mechanics
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
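As a minimal sketch of the time propagation described above, the snippet below integrates Newton's second law for a single particle in a one-dimensional harmonic potential using the velocity Verlet algorithm. The mass, force constant, and time step are arbitrary illustrative values and are not taken from any particular force field.

```python
# Illustrative parameters (arbitrary units).
m, k = 1.0, 1.0            # mass and harmonic force constant
dt, n_steps = 0.01, 1000   # time step and number of steps

def force(x):
    # Force is the negative gradient of the potential U(x) = 0.5 * k * x**2.
    return -k * x

x, v = 1.0, 0.0            # initial position and velocity
trajectory = []
for _ in range(n_steps):
    a = force(x) / m
    x = x + v * dt + 0.5 * a * dt**2       # position update
    a_new = force(x) / m
    v = v + 0.5 * (a + a_new) * dt         # velocity update with averaged acceleration
    trajectory.append((x, v))

# The total energy should stay nearly constant for this symplectic integrator.
energies = [0.5 * m * vi**2 + 0.5 * k * xi**2 for xi, vi in trajectory]
print(max(energies) - min(energies))   # small fluctuation, not a systematic drift
```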
Variables
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
Coordinate representations
Most force fields are distance-dependent, which makes Cartesian coordinates the most convenient representation for evaluating them. Yet the comparatively rigid nature of bonds between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation, and conversely a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and in long chain molecules introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion to Cartesian conversion is the Natural Extension Reference Frame (NERF) method.
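The following sketch shows one possible implementation of the NERF placement step: given the Cartesian positions of three already-placed atoms and the internal coordinates of the next atom (bond length, bond angle, torsion), it returns that atom's Cartesian position. The sign convention for the torsion depends on how the normal vector is chosen, so it should be checked against whichever convention a given program uses; the numerical example at the end is purely illustrative.

```python
import numpy as np

def nerf_place(a, b, c, bond, angle, torsion):
    """Place atom d from atoms a, b, c and internal coordinates.

    bond    = |d - c|
    angle   = bond angle b-c-d (radians)
    torsion = dihedral a-b-c-d (radians); the sign convention may differ
              between programs.
    """
    a, b, c = map(np.asarray, (a, b, c))
    bc = c - b
    bc = bc / np.linalg.norm(bc)
    n = np.cross(b - a, bc)
    n = n / np.linalg.norm(n)
    m = np.cross(n, bc)

    # Position of d in the local frame spanned by (bc, m, n).
    d_local = np.array([
        -bond * np.cos(angle),
        bond * np.sin(angle) * np.cos(torsion),
        bond * np.sin(angle) * np.sin(torsion),
    ])
    rot = np.column_stack([bc, m, n])
    return c + rot @ d_local

# Example: place a fourth atom 1.5 units from c, with a 109.5 degree
# bond angle and a 60 degree torsion (all values invented).
d = nerf_place([0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [2.0, 1.4, 0.0],
               bond=1.5, angle=np.radians(109.5), torsion=np.radians(60.0))
print(d)
```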
Applications
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of force field models are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
See also
References
Further reading
Bioinformatics
Molecular biology
Computational chemistry | 0.810838 | 0.982482 | 0.796634 |
Force field (chemistry) | In the context of chemistry, molecular physics, physical chemistry, and molecular modelling, a force field is a computational model that is used to describe the forces between atoms (or collections of atoms) within molecules or between molecules as well as in crystals. Force fields are a variety of interatomic potentials. More precisely, the force field refers to the functional form and parameter sets used to calculate the potential energy of a system on the atomistic level. Force fields are usually used in molecular dynamics or Monte Carlo simulations. The parameters for a chosen energy function may be derived from classical laboratory experiment data, calculations in quantum mechanics, or both. Force fields utilize the same concept as force fields in classical physics, with the main difference being that the force field parameters in chemistry describe the energy landscape on the atomistic level. From a force field, the acting forces on every particle are derived as a gradient of the potential energy with respect to the particle coordinates.
A large number of different force field types exist today (e.g. for organic molecules, ions, polymers, minerals, and metals). Depending on the material, different functional forms are usually chosen for the force fields since different types of atomistic interactions dominate the material behavior.
There are various criteria that can be used for categorizing force field parametrization strategies. An important differentiation is 'component-specific' and 'transferable'. For a component-specific parametrization, the considered force field is developed solely for describing a single given substance (e.g. water). For a transferable force field, all or some parameters are designed as building blocks and become transferable/ applicable for different substances (e.g. methyl groups in alkane transferable force fields). A different important differentiation addresses the physical structure of the models: All-atom force fields provide parameters for every type of atom in a system, including hydrogen, while united-atom interatomic potentials treat the hydrogen and carbon atoms in methyl groups and methylene bridges as one interaction center. Coarse-grained potentials, which are often used in long-time simulations of macromolecules such as proteins, nucleic acids, and multi-component complexes, sacrifice chemical details for higher computing efficiency.
Force fields for molecular systems
The basic functional form of potential energy for modeling molecular systems includes intramolecular interaction terms for interactions of atoms that are linked by covalent bonds and intermolecular (i.e. nonbonded, also termed noncovalent) terms that describe the long-range electrostatic and van der Waals forces. The specific decomposition of the terms depends on the force field, but a general form for the total energy in an additive force field can be written as
E_total = E_bonded + E_nonbonded
where the components of the covalent and noncovalent contributions are given by the following summations:
E_bonded = E_bond + E_angle + E_dihedral
E_nonbonded = E_electrostatic + E_van der Waals
The bond and angle terms are usually modeled by quadratic energy functions that do not allow bond breaking. A more realistic description of a covalent bond at higher stretching is provided by the more expensive Morse potential. The functional form for the dihedral energy varies from one force field to another. Additional "improper torsion" terms may be added to enforce the planarity of aromatic rings and other conjugated systems, as may "cross-terms" that describe the coupling of different internal variables, such as angles and bond lengths. Some force fields also include explicit terms for hydrogen bonds.
The nonbonded terms are computationally most intensive. A popular choice is to limit interactions to pairwise energies. The van der Waals term is usually computed with a Lennard-Jones potential or the Mie potential and the electrostatic term with Coulomb's law. However, both can be buffered or scaled by a constant factor to account for electronic polarizability. A large number of force fields based on this or similar energy expressions have been proposed in the past decades for modeling different types of materials such as molecular substances, metals, glasses etc. - see below for a comprehensive list of force fields.
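To make the decomposition concrete, the sketch below evaluates the individual terms of a simple additive energy expression (harmonic bonds, harmonic angle, 12-6 Lennard-Jones, Coulomb) for a toy triatomic. The geometry, charges, and parameters are invented for illustration and are not taken from any published force field; the Coulomb prefactor is the commonly used conversion constant for kcal/mol, Å, and elementary-charge units.

```python
import numpy as np

COULOMB_K = 332.06  # approximate conversion constant, kcal*Angstrom/(mol*e^2)

def bond_energy(r, k, r0):
    """Harmonic bond stretch: (k/2) * (r - r0)**2."""
    return 0.5 * k * (r - r0) ** 2

def angle_energy(theta, k, theta0):
    """Harmonic angle bend (angles in radians)."""
    return 0.5 * k * (theta - theta0) ** 2

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb_energy(r, qi, qj):
    """Coulomb interaction between two point charges."""
    return COULOMB_K * qi * qj / r

# Toy triatomic with invented geometry, charges, and parameters.
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [1.5, 0.9, 0.0]])
charges = [-0.4, 0.8, -0.4]

def dist(i, j):
    return np.linalg.norm(pos[i] - pos[j])

def angle(i, j, k):
    u, v = pos[i] - pos[j], pos[k] - pos[j]
    return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

E_bond = (bond_energy(dist(0, 1), k=300.0, r0=1.0)
          + bond_energy(dist(1, 2), k=300.0, r0=1.0))
E_angle = angle_energy(angle(0, 1, 2), k=50.0, theta0=np.radians(109.5))
# Nonbonded terms only between the terminal atoms here; which pairs are
# excluded or scaled (1-2, 1-3, 1-4) is a force-field-specific choice.
E_nonbonded = (lj_energy(dist(0, 2), epsilon=0.1, sigma=1.5)
               + coulomb_energy(dist(0, 2), charges[0], charges[2]))

print({"bond": E_bond, "angle": E_angle, "nonbonded": E_nonbonded,
       "total": E_bond + E_angle + E_nonbonded})
```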
Bond stretching
As it is rare for bonds to deviate significantly from their equilibrium values, the most simplistic approaches utilize a Hooke's law formula:
E_bond = (k_ij / 2) (l_ij - l_0,ij)^2
where k_ij is the force constant, l_ij is the bond length, and l_0,ij is the value for the bond length between atoms i and j when all other terms in the force field are set to 0. The term l_0,ij is at times differently defined or taken at different thermodynamic conditions.
The bond stretching constant k_ij can be determined from the experimental infrared spectrum, Raman spectrum, or high-level quantum-mechanical calculations. The constant determines vibrational frequencies in molecular dynamics simulations. The stronger the bond between atoms, the higher the value of the force constant and the higher the wavenumber (energy) in the IR/Raman spectrum.
Though the formula of Hooke's law provides a reasonable level of accuracy at bond lengths near the equilibrium distance, it is less accurate as one moves away. In order to model the Morse curve better one could employ cubic and higher powers. However, for most practical applications these differences are negligible, and inaccuracies in predictions of bond lengths are on the order of the thousandth of an angstrom, which is also the limit of reliability for common force fields. A Morse potential can be employed instead to enable bond breaking and higher accuracy, even though it is less efficient to compute. For reactive force fields, bond breaking and bond orders are additionally considered.
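A brief numerical comparison of the two forms discussed here, with made-up parameters: near the minimum the harmonic and Morse energies nearly coincide, but at large stretching the harmonic term keeps rising while the Morse potential levels off at the dissociation energy. The Morse width parameter is chosen so that both curves have the same curvature at the equilibrium length.

```python
import math

# Illustrative parameters only (not from a specific force field).
k, r0 = 600.0, 1.0               # harmonic force constant and equilibrium length
D_e = 100.0                      # Morse well depth (dissociation energy)
a = math.sqrt(k / (2.0 * D_e))   # chosen so both curvatures match at r0

def harmonic(r):
    return 0.5 * k * (r - r0) ** 2

def morse(r):
    return D_e * (1.0 - math.exp(-a * (r - r0))) ** 2

for r in (0.9, 1.0, 1.1, 1.5, 2.5):
    print(f"r = {r:.2f}  harmonic = {harmonic(r):8.2f}  morse = {morse(r):8.2f}")
```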
Electrostatic interactions
Electrostatic interactions are represented by a Coulomb energy, which utilizes atomic charges to represent chemical bonding ranging from covalent to polar covalent and ionic bonding. The typical formula is the Coulomb law:
E_Coulomb = (1 / (4 π ε0)) (q_i q_j / r_ij)
where r_ij is the distance between two atoms i and j, and q_i and q_j are their partial charges. The total Coulomb energy is a sum over all pairwise combinations of atoms and usually excludes pairs that are directly bonded (1-2 pairs) or separated by two bonds (1-3 pairs); interactions between 1-4 pairs are often scaled rather than fully counted.
Atomic charges can make dominant contributions to the potential energy, especially for polar molecules and ionic compounds, and are critical to simulate the geometry, interaction energy, and the reactivity. The assignment of charges usually uses some heuristic approach, with different possible solutions.
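The sketch below sums pairwise Coulomb energies over all atom pairs while skipping pairs that are one or two bonds apart (the 1-2 and 1-3 exclusions mentioned above); how 1-4 pairs are scaled varies between force fields and is left out here. The coordinates, charges, and bond list are invented for illustration.

```python
import itertools
import numpy as np

COULOMB_K = 332.06  # approximate conversion constant, kcal*Angstrom/(mol*e^2)

def excluded_pairs(bonds):
    """Return the set of 1-2 and 1-3 atom pairs implied by a bond list."""
    neighbours = {}
    for i, j in bonds:
        neighbours.setdefault(i, set()).add(j)
        neighbours.setdefault(j, set()).add(i)
    excl = {frozenset(b) for b in bonds}                 # 1-2 pairs
    for nbrs in neighbours.values():                     # 1-3 pairs share a middle atom
        for i, k in itertools.combinations(nbrs, 2):
            excl.add(frozenset((i, k)))
    return excl

def coulomb_total(coords, charges, bonds):
    coords = np.asarray(coords, dtype=float)
    excl = excluded_pairs(bonds)
    energy = 0.0
    for i, j in itertools.combinations(range(len(charges)), 2):
        if frozenset((i, j)) in excl:
            continue
        r = np.linalg.norm(coords[i] - coords[j])
        energy += COULOMB_K * charges[i] * charges[j] / r
    return energy

# Invented 4-atom chain 0-1-2-3: only the 1-4 pair (0, 3) contributes here.
coords  = [[0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [4.5, 0, 0]]
charges = [-0.2, 0.2, 0.2, -0.2]
bonds   = [(0, 1), (1, 2), (2, 3)]
print(coulomb_total(coords, charges, bonds))
```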
Force fields for crystal systems
Atomistic interactions in crystal systems significantly deviate from those in molecular systems, e.g. of organic molecules. For crystal systems, in particular multi-body interactions are important and cannot be neglected if a high accuracy of the force field is the aim. For crystal systems with covalent bonding, bond order potentials are usually used, e.g. Tersoff potentials. For metal systems, usually embedded atom potentials are used. For metals, also so-called Drude model potentials have been developed, which describe a form of attachment of electrons to nuclei.
Parameterization
In addition to the functional form of the potentials, a force field consists of the parameters of these functions. Together, they specify the interactions on the atomistic level. The parametrization, i.e. the determination of the parameter values, is crucial for the accuracy and reliability of the force field. Different parametrization procedures have been developed for different substances, e.g. metals, ions, and molecules. For different material types, usually different parametrization strategies are used. In general, two main types can be distinguished for the parametrization: either using data/information from the atomistic level, e.g. from quantum mechanical calculations or spectroscopic data, or using data on macroscopic properties, e.g. the hardness or compressibility of a given material. Often a combination of these routes is used. Hence, one way or the other, force field parameters are always determined in an empirical way. Nevertheless, the term 'empirical' is often used in the context of force field parameters when macroscopic material property data was used for the fitting. Experimental data (microscopic and macroscopic) included in the fit comprise, for example, the enthalpy of vaporization, the enthalpy of sublimation, dipole moments, and various spectroscopic properties such as vibrational frequencies. Often, for molecular systems, quantum mechanical calculations in the gas phase are used for parametrizing intramolecular interactions, while intermolecular dispersive interactions are parametrized by using macroscopic properties such as liquid densities. The assignment of atomic charges often follows quantum mechanical protocols with some heuristics, which can lead to significant deviations in representing specific properties.
A large number of workflows and parametrization procedures have been employed in the past decades, using different data and optimization strategies to determine the force field parameters. They differ significantly, partly because different developments have had different focuses. The parameters for molecular simulations of biological macromolecules such as proteins, DNA, and RNA were often derived/transferred from observations on small organic molecules, which are more accessible to experimental studies and quantum calculations.
Atom types are defined for different elements as well as for the same elements in sufficiently different chemical environments. For example, oxygen atoms in water and oxygen atoms in a carbonyl functional group are classified as different force field types. Typical molecular force field parameter sets include values for atomic mass, atomic charge, Lennard-Jones parameters for every atom type, as well as equilibrium values of bond lengths, bond angles, and dihedral angles. The bonded terms refer to pairs, triplets, and quadruplets of bonded atoms, and include values for the effective spring constant for each potential.
Heuristic force field parametrization procedures have been very successful for many years, but have recently been criticized, since they are usually not fully automated and are therefore subject to some subjectivity of the developers, which also raises problems regarding the reproducibility of the parametrization procedure.
Efforts to provide open source codes and methods include openMM and openMD. The use of semi-automation or full automation, without input from chemical knowledge, is likely to increase inconsistencies at the level of atomic charges and in the assignment of the remaining parameters, and is likely to dilute the interpretability and performance of the parameters.
Force field databases
A large number of force fields have been published in the past decades, mostly in scientific publications. In recent years, some databases have attempted to collect, categorize and make force fields digitally available. Different databases focus on different types of force fields. For example, the OpenKIM database focuses on interatomic functions describing the individual interactions between specific elements. The TraPPE database focuses on transferable force fields of organic molecules (developed by the Siepmann group). The MolMod database focuses on molecular and ionic force fields (both component-specific and transferable).
Transferability and mixing function types
Functional forms and parameter sets have been defined by the developers of interatomic potentials and feature variable degrees of self-consistency and transferability. When functional forms of the potential terms vary or are mixed, the parameters from one interatomic potential function can typically not be used together with another interatomic potential function. In some cases, modifications can be made with minor effort, for example from 9-6 Lennard-Jones potentials to 12-6 Lennard-Jones potentials. Transfers from Buckingham potentials to harmonic potentials, or from embedded atom models to harmonic potentials, on the contrary, would require many additional assumptions and may not be possible.
In many cases, force fields can be combined straightforwardly. Yet, often, additional specifications and assumptions are required.
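As one concrete example of such a combination step, the widely used Lorentz-Berthelot rules construct unlike-pair Lennard-Jones parameters from the like-pair ones by taking an arithmetic mean of the size parameters and a geometric mean of the well depths. The parameter values below are placeholders, and many force fields prescribe different rules (for example, geometric means for both parameters).

```python
import math

def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    """Unlike-pair LJ parameters from like-pair ones (Lorentz-Berthelot rules)."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)   # arithmetic mean of size parameters
    eps_ij = math.sqrt(eps_i * eps_j)      # geometric mean of well depths
    return sigma_ij, eps_ij

# Placeholder like-pair parameters for two hypothetical atom types A and B.
sigma_A, eps_A = 3.4, 0.23
sigma_B, eps_B = 3.0, 0.10
print(lorentz_berthelot(sigma_A, eps_A, sigma_B, eps_B))   # (3.2, ~0.152)
```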
Limitations
All interatomic potentials are based on approximations and experimental data and are therefore often termed empirical. The performance varies from higher accuracy than density functional theory (DFT) calculations, with access to million-times larger systems and time scales, to random guesses, depending on the force field. The use of accurate representations of chemical bonding, combined with reproducible experimental data and validation, can lead to lasting interatomic potentials of high quality with much fewer parameters and assumptions in comparison to DFT-level quantum methods.
Possible limitations include atomic charges, also called point charges. Most force fields rely on point charges to reproduce the electrostatic potential around molecules, which works less well for anisotropic charge distributions. The remedy is that point charges have a clear interpretation and virtual electrons can be added to capture essential features of the electronic structure, such as additional polarizability in metallic systems to describe the image potential, internal multipole moments in π-conjugated systems, and lone pairs in water. Electronic polarization of the environment may be better included by using polarizable force fields or using a macroscopic dielectric constant. However, application of one value of dielectric constant is a coarse approximation in the highly heterogeneous environments of proteins, biological membranes, minerals, or electrolytes.
All types of van der Waals forces are also strongly environment-dependent because these forces originate from interactions of induced and "instantaneous" dipoles (see Intermolecular force). The original Fritz London theory of these forces applies only in a vacuum. A more general theory of van der Waals forces in condensed media was developed by A. D. McLachlan in 1963 and included the original London's approach as a special case. The McLachlan theory predicts that van der Waals attractions in media are weaker than in vacuum and follow the like dissolves like rule, which means that different types of atoms interact more weakly than identical types of atoms. This is in contrast to combinatorial rules or Slater-Kirkwood equation applied for development of the classical force fields. The combinatorial rules state that the interaction energy of two dissimilar atoms (e.g., C...N) is an average of the interaction energies of corresponding identical atom pairs (i.e., C...C and N...N). According to McLachlan's theory, the interactions of particles in media can even be fully repulsive, as observed for liquid helium, however, the lack of vaporization and presence of a freezing point contradicts a theory of purely repulsive interactions. Measurements of attractive forces between different materials (Hamaker constant) have been explained by Jacob Israelachvili. For example, "the interaction between hydrocarbons across water is about 10% of that across vacuum". Such effects are represented in molecular dynamics through pairwise interactions that are spatially more dense in the condensed phase relative to the gas phase and reproduced once the parameters for all phases are validated to reproduce chemical bonding, density, and cohesive/surface energy.
Limitations have been strongly felt in protein structure refinement. The major underlying challenge is the huge conformation space of polymeric molecules, which grows beyond current computational feasibility when they contain more than ~20 monomers. Participants in the Critical Assessment of protein Structure Prediction (CASP) did not try to refine their models to avoid "a central embarrassment of molecular mechanics, namely that energy minimization or molecular dynamics generally leads to a model that is less like the experimental structure". Force fields have been applied successfully for protein structure refinement in different X-ray crystallography and NMR spectroscopy applications, especially using the program XPLOR. However, the refinement is driven mainly by a set of experimental constraints, and the interatomic potentials serve mainly to remove interatomic hindrances. The results of calculations were practically the same with rigid sphere potentials implemented in the program DYANA (calculations from NMR data), or with programs for crystallographic refinement that use no energy functions at all. These shortcomings are related to interatomic potentials and to the inability to sample the conformation space of large molecules effectively. The development of parameters to tackle such large-scale problems therefore requires new approaches. A specific problem area is homology modeling of proteins. Meanwhile, alternative empirical scoring functions have been developed for ligand docking, protein folding, homology model refinement, computational protein design, and modeling of proteins in membranes.
It has also been argued that some protein force fields operate with energies that are irrelevant to protein folding or ligand binding. The parameters of protein force fields reproduce the enthalpy of sublimation, i.e., the energy of evaporation of molecular crystals. However, protein folding and ligand binding are thermodynamically closer to crystallization, or liquid-solid transitions, as these processes represent freezing of mobile molecules in condensed media. Thus, free energy changes during protein folding or ligand binding are expected to represent a combination of an energy similar to the heat of fusion (the energy absorbed during melting of molecular crystals), a conformational entropy contribution, and solvation free energy. The heat of fusion is significantly smaller than the enthalpy of sublimation. Hence, the potentials describing protein folding or ligand binding need more consistent parameterization protocols, e.g., as described for IFF. Indeed, the energies of H-bonds in proteins are ~ -1.5 kcal/mol when estimated from protein engineering or alpha helix to coil transition data, but the same energies estimated from the sublimation enthalpy of molecular crystals were -4 to -6 kcal/mol, which is related to re-forming existing hydrogen bonds rather than forming hydrogen bonds from scratch. The depths of modified Lennard-Jones potentials derived from protein engineering data were also smaller than in typical potential parameters and followed the like dissolves like rule, as predicted by McLachlan theory.
Force fields available in literature
Different force fields are designed for different purposes:
Classical
AMBER (Assisted Model Building and Energy Refinement) – widely used for proteins and DNA.
CFF (Consistent Force Field) – a family of force fields adapted to a broad variety of organic compounds, includes force fields for polymers, metals, etc. CFF was developed by Arieh Warshel, Lifson, and coworkers as a general method for unifying studies of energies, structures, and vibration of general molecules and molecular crystals. The CFF program, developed by Levitt and Warshel, is based on the Cartesian representation of all the atoms, and it served as the basis for many subsequent simulation programs.
CHARMM (Chemistry at HARvard Molecular Mechanics) – originally developed at Harvard, widely used for both small molecules and macromolecules
COSMOS-NMR – hybrid QM/MM force field adapted to various inorganic compounds, organic compounds, and biological macromolecules, including semi-empirical calculation of atomic charges NMR properties. COSMOS-NMR is optimized for NMR-based structure elucidation and implemented in COSMOS molecular modelling package.
CVFF – also used broadly for small molecules and macromolecules.
ECEPP – first force field for polypeptide molecules - developed by F.A. Momany, H.A. Scheraga and colleagues. ECEPP was developed specifically for the modeling of peptides and proteins. It uses fixed geometries of amino acid residues to simplify the potential energy surface. Thus, the energy minimization is conducted in the space of protein torsion angles. Both MM2 and ECEPP include potentials for H-bonds and torsion potentials for describing rotations around single bonds. ECEPP/3 was implemented (with some modifications) in Internal Coordinate Mechanics and FANTOM.
GROMOS (GROningen MOlecular Simulation) – a force field that comes as part of the GROMOS software, a general-purpose molecular dynamics computer simulation package for the study of biomolecular systems. GROMOS force field A-version has been developed for application to aqueous or apolar solutions of proteins, nucleotides, and sugars. A B-version to simulate gas phase isolated molecules is also available.
IFF (Interface Force Field) – covers metals, minerals, 2D materials, and polymers. It uses 12-6 LJ and 9-6 LJ interactions. IFF was developed for compounds across the periodic table. It assigns consistent charges, utilizes standard conditions as a reference state, reproduces structures, energies, and energy derivatives, and quantifies limitations for all included compounds. The Interface force field (IFF) assumes one single energy expression for all compounds across the periodic table (with 9-6 and 12-6 LJ options). The IFF is in most parts non-polarizable, but also comprises polarizable parts, e.g. for some metals (Au, W) and pi-conjugated molecules.
MMFF (Merck Molecular Force Field) – developed at Merck for a broad range of molecules.
MM2 was developed by Norman Allinger mainly for conformational analysis of hydrocarbons and other small organic molecules. It is designed to reproduce the equilibrium covalent geometry of molecules as precisely as possible. It implements a large set of parameters that is continuously refined and updated for many different classes of organic compounds (MM3 and MM4).
OPLS (Optimized Potential for Liquid Simulations) (variants include OPLS-AA, OPLS-UA, OPLS-2001, OPLS-2005, OPLS3e, OPLS4) – developed by William L. Jorgensen at the Yale University Department of Chemistry.
QCFF/PI – A general force field for conjugated molecules.
UFF (Universal Force Field) – A general force field with parameters for the full periodic table up to and including the actinoids, developed at Colorado State University. The reliability is known to be poor due to lack of validation and interpretation of the parameters for nearly all claimed compounds, especially metals and inorganic compounds.
Polarizable
Several force fields explicitly capture polarizability, where a particle's effective charge can be influenced by electrostatic interactions with its neighbors. Core-shell models are common, which consist of a positively charged core particle, representing the polarizable atom, and a negatively charged particle attached to the core atom through a spring-like harmonic oscillator potential. Recent examples include polarizable models with virtual electrons that reproduce image charges in metals and polarizable biomolecular force fields.
AMBER – polarizable force field developed by Jim Caldwell and coworkers.
AMOEBA (Atomic Multipole Optimized Energetics for Biomolecular Applications) – force field developed by Pengyu Ren (University of Texas at Austin) and Jay W. Ponder (Washington University). AMOEBA force field is gradually moving to more physics-rich AMOEBA+.
CHARMM – polarizable force field developed by S. Patel (University of Delaware) and C. L. Brooks III (University of Michigan). Based on the classical Drude oscillator developed by Alexander MacKerell (University of Maryland, Baltimore) and Benoit Roux (University of Chicago).
CFF/ind and ENZYMIX – The first polarizable force field which has subsequently been used in many applications to biological systems.
COSMOS-NMR (Computer Simulation of Molecular Structure) – developed by Ulrich Sternberg and coworkers. Hybrid QM/MM force field enables explicit quantum-mechanical calculation of electrostatic properties using localized bond orbitals with fast BPT formalism. Atomic charge fluctuation is possible in each molecular dynamics step.
DRF90 – developed by P. Th. van Duijnen and coworkers.
NEMO (Non-Empirical Molecular Orbital) – procedure developed by Gunnar Karlström and coworkers at Lund University (Sweden)
PIPF – The polarizable intermolecular potential for fluids is an induced point-dipole force field for organic liquids and biopolymers. The molecular polarization is based on Thole's interacting dipole (TID) model and was developed by the Jiali Gao research group at the University of Minnesota.
Polarizable Force Field (PFF) – developed by Richard A. Friesner and coworkers.
SP-basis Chemical Potential Equalization (CPE) – approach developed by R. Chelli and P. Procacci.
PHAST – polarizable potential developed by Chris Cioce and coworkers.
ORIENT – procedure developed by Anthony J. Stone (Cambridge University) and coworkers.
Gaussian Electrostatic Model (GEM) – a polarizable force field based on Density Fitting developed by Thomas A. Darden and G. Andrés Cisneros at NIEHS; and Jean-Philip Piquemal at Paris VI University.
Atomistic Polarizable Potential for Liquids, Electrolytes, and Polymers (APPLE&P) – developed by Oleg Borodin, Dmitry Bedrov and coworkers and distributed by Wasatch Molecular Incorporated.
Polarizable procedure based on the Kim-Gordon approach developed by Jürg Hutter and coworkers (University of Zürich)
GFN-FF (Geometry, Frequency, and Noncovalent Interaction Force-Field) – a completely automated partially polarizable generic force-field for the accurate description of structures and dynamics of large molecules across the periodic table developed by Stefan Grimme and Sebastian Spicher at the University of Bonn.
WASABe v1.0 PFF (for Water, orgAnic Solvents, And Battery electrolytes) – an isotropic atomic-dipole polarizable force field developed by Oleg Starovoytov for the accurate description of battery electrolytes, in terms of thermodynamic and dynamic properties, at high lithium salt concentrations in sulfonate solvents.
XED (eXtended Electron Distribution) - a polarizable force-field created as a modification of an atom-centered charge model, developed by Andy Vinter. Partially charged monopoles are placed surrounding atoms to simulate more geometrically accurate electrostatic potentials at a fraction of the expense of using quantum mechanical methods. Primarily used by software packages supplied by Cresset Biomolecular Discovery.
Reactive
EVB (Empirical valence bond) – reactive force field introduced by Warshel and coworkers for use in modeling chemical reactions in different environments. The EVB facilitates calculating activation free energies in condensed phases and in enzymes.
ReaxFF – reactive force field (interatomic potential) developed by Adri van Duin, William Goddard and coworkers. It is slower than classical MD (50x), needs parameter sets with specific validation, and has no validation for surface and interfacial energies. Parameters are non-interpretable. It can be used for atomistic-scale dynamical simulations of chemical reactions. Parallelized ReaxFF allows reactive simulations on >>1,000,000 atoms on large supercomputers.
Coarse-grained
DPD (Dissipative particle dynamics) – This is a method commonly applied in chemical engineering. It is typically used for studying the hydrodynamics of various simple and complex fluids which require consideration of time and length scales larger than those accessible to classical molecular dynamics. The potential was originally proposed by Hoogerbrugge and Koelman, with later modifications by Español and Warren. The current state of the art was well documented in a CECAM workshop in 2008. Recently, work has been undertaken to capture some of the chemical subtleties relevant to solutions. This has led to work considering automated parameterisation of the DPD interaction potentials against experimental observables.
MARTINI – a coarse-grained potential developed by Marrink and coworkers at the University of Groningen, initially developed for molecular dynamics simulations of lipids, later extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parameterized with the aim of reproducing thermodynamic properties.
SAFT – A top-down coarse-grained model developed in the Molecular Systems Engineering group at Imperial College London fitted to liquid phase densities and vapor pressures of pure compounds by using the SAFT equation of state.
SIRAH – a coarse-grained force field developed by Pantano and coworkers of the Biomolecular Simulations Group, Institut Pasteur of Montevideo, Uruguay; developed for molecular dynamics of water, DNA, and proteins. Freely available for the AMBER and GROMACS packages.
VAMM (Virtual atom molecular mechanics) – a coarse-grained force field developed by Korkut and Hendrickson for molecular mechanics calculations such as large scale conformational transitions based on the virtual interactions of C-alpha atoms. It is a knowledge based force field and formulated to capture features dependent on secondary structure and on residue-specific contact information in proteins.
Machine learning
MACE (Multi Atomic Cluster Expansion) is a highly accurate machine learning force field architecture that combines the rigorous many-body expansion of the total potential energy with rotationally equivariant representations of the system.
ANI (ANAKIN-ME, Accurate NeurAl networK engINe for Molecular Energies) is a transferable neural network potential, built from atomic environment vectors, and able to provide DFT-level accuracy in terms of energies.
FFLUX (originally QCTFF) – a set of trained Kriging models which operate together to provide a molecular force field trained on Atoms in molecules or Quantum chemical topology energy terms including electrostatic, exchange and electron correlation.
TensorMol – a mixed model in which a neural network provides a short-range potential, whilst more traditional potentials add screened long-range terms.
Δ-ML – not a force field method but a model that adds learnt correctional energy terms to approximate and relatively computationally cheap quantum chemical methods in order to provide the accuracy level of a higher-order, more computationally expensive quantum chemical model.
SchNet – a neural network utilising continuous-filter convolutional layers to predict chemical properties and potential energy surfaces.
PhysNet – a neural-network-based energy function to predict energies, forces and (fluctuating) partial charges.
Water
The set of parameters used to model water or aqueous solutions (basically a force field for water) is called a water model. Many water models have been proposed; some examples are TIP3P, TIP4P, SPC, flexible simple point charge water model (flexible SPC), ST2, and mW. Other solvents and methods of solvent representation are also applied within computational chemistry and physics; these are termed solvent models.
Modified amino acids
Forcefield_PTM – An AMBER-based forcefield and webtool for modeling common post-translational modifications of amino acids in proteins developed by Chris Floudas and coworkers. It uses the ff03 charge model and has several side-chain torsion corrections parameterized to match the quantum chemical rotational surface.
Forcefield_NCAA - An AMBER-based forcefield and webtool for modeling common non-natural amino acids in proteins in condensed-phase simulations using the ff03 charge model. The charges have been reported to be correlated with hydration free energies of corresponding side-chain analogs.
Other
LFMM (Ligand Field Molecular Mechanics) - functions for the coordination sphere around transition metals based on the angular overlap model (AOM). Implemented in the Molecular Operating Environment (MOE) as DommiMOE and in Tinker
VALBOND - a function for angle bending that is based on valence bond theory and works for large angular distortions, hypervalent molecules, and transition metal complexes. It can be incorporated into other force fields such as CHARMM and UFF.
See also
References
Further reading
Intermolecular forces
Molecular physics
Molecular modelling | 0.805593 | 0.988667 | 0.796462 |
Mathematical model | A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences (such as physics, biology, earth science, chemistry) and engineering disciplines (such as computer science, electrical engineering), as well as in non-physical systems such as the social sciences (such as economics, psychology, sociology, political science). It can also be taught as a subject in its own right.
The use of mathematical models to solve problems in business or military operations is a large part of the field of operations research. Mathematical models are also used in music, linguistics, and philosophy (for example, intensively in analytic philosophy). A model may help to explain a system and to study the effects of different components, and to make predictions about behavior.
Elements of a mathematical model
Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, a traditional mathematical model contains most of the following elements:
Governing equations
Supplementary sub-models
Defining equations
Constitutive equations
Assumptions and constraints
Initial and boundary conditions
Classical constraints and kinematic equations
Classifications
Mathematical models are of different types:
Linear vs. nonlinear. If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Linear structure implies that a problem can be decomposed into simpler parts that can be treated independently and/or analyzed at a different scale and the results obtained will remain valid for the initial problem when recomposed and rescaled. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
Static vs. dynamic. A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations or difference equations.
Explicit vs. implicit. If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method. In such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
Discrete vs. continuous. A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
Deterministic vs. probabilistic (stochastic). A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
Deductive, inductive, or floating. A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model.
Strategic vs. non-strategic. Models used in game theory are different in the sense that they model agents with incompatible incentives, such as competing species or bidders in an auction. Strategic models assume that players are autonomous decision makers who rationally choose actions that maximize their objective function. A key challenge of using strategic models is defining and computing solution concepts such as Nash equilibrium. An interesting property of strategic models is that they separate reasoning about rules of the game from reasoning about behavior of the players.
Construction
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables. Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model's user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases. For example, economists often apply linear algebra when using input–output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
A priori information
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually, it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function, but we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
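To illustrate this "grey-box" situation, the sketch below assumes the exponentially decaying functional form and estimates its two unknown parameters (initial amount and decay rate) from noisy measurements. The data are synthetic, generated inside the script, and all parameter values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(t, c0, k):
    """Assumed functional form: exponentially decaying drug amount."""
    return c0 * np.exp(-k * t)

# Synthetic "measurements": true c0 = 10, true k = 0.3, plus noise.
t_data = np.linspace(0, 12, 25)
y_data = model(t_data, 10.0, 0.3) + rng.normal(0.0, 0.2, t_data.size)

# Estimate the unknown parameters from the data.
(c0_hat, k_hat), _ = curve_fit(model, t_data, y_data, p0=(1.0, 0.1))
print(c0_hat, k_hat)   # should come out close to 10 and 0.3
```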
In black-box models, one tries to estimate both the functional form of relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information we would try to use functions as general as possible to cover all different models. An often used approach for black-box models are neural networks which usually do not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms which were developed as part of nonlinear system identification can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
Subjective information
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
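A minimal numerical sketch of that coin example: the experimenter encodes a subjective prior over the probability of heads as a Beta distribution and updates it after observing a single toss. The prior parameters chosen here are arbitrary and merely stand in for a belief, based on the coin's bent shape, that heads is somewhat more likely.

```python
# Subjective prior over p(heads), encoded as Beta(alpha, beta).
alpha, beta = 4.0, 2.0        # arbitrary choice: prior mean 4/(4+2) ~ 0.67

# Observe one toss of the bent coin.
observed_heads = True

# Conjugate Bayesian update: add 1 to alpha for heads, or to beta for tails.
if observed_heads:
    alpha += 1
else:
    beta += 1

# The predictive probability that the next flip comes up heads
# equals the posterior mean of the Beta distribution.
p_next_heads = alpha / (alpha + beta)
print(p_next_heads)   # 5/7, roughly 0.714
```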
Complexity
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam's razor is a principle particularly relevant to modeling, its essential idea being that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton's classical mechanics is an approximated model of the real world. Still, Newton's model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only. Note that better accuracy does not necessarily mean a better model. Statistical models are prone to overfitting which means that a model is fitted to data too much and it has lost its ability to generalize to new events that were not observed before.
Training, tuning, and fitting
Any model which is not pure white-box contains some parameters that can be used to fit the model to the system it is intended to describe. If the modeling is done by an artificial neural network or other machine learning, the optimization of parameters is called training, while the optimization of model hyperparameters is called tuning and often uses cross-validation. In more conventional modeling through explicitly given mathematical functions, parameters are often determined by curve fitting.
Evaluation and assessment
A crucial part of the modeling process is the evaluation of whether or not a given mathematical model describes a system accurately. This question can be difficult to answer as it involves several different types of evaluation.
Prediction of empirical data
Usually, the easiest part of model evaluation is checking whether a model predicts experimental measurements or other empirical data not used in the model development. In models with parameters, a common approach is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
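A small sketch of such a train/verification split, using a synthetic dataset and an ordinary polynomial fit: the parameters are estimated on the training subset only, and the verification subset is used solely to check the predictive error. The dataset, noise level, and polynomial degree are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a quadratic trend plus noise.
x = np.linspace(-3, 3, 60)
y = 0.5 * x**2 - x + 1.0 + rng.normal(0.0, 0.3, x.size)

# Split into disjoint training and verification subsets.
idx = rng.permutation(x.size)
train, verify = idx[:40], idx[40:]

# Fit the model parameters on the training data only.
coeffs = np.polyfit(x[train], y[train], deg=2)

# Assess the fit on the held-out verification data.
pred = np.polyval(coeffs, x[verify])
rmse = np.sqrt(np.mean((pred - y[verify]) ** 2))
print(coeffs, rmse)   # rmse should be close to the noise level (~0.3)
```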
Defining a metric to measure distances between observed and predicted data is a useful tool for assessing model fit. In statistics, decision theory, and some economic models, a loss function plays a similar role. While it is rather straightforward to test the appropriateness of parameters, it can be more difficult to test the validity of the general mathematical form of a model. In general, more mathematical tools have been developed to test the fit of statistical models than models involving differential equations. Tools from nonparametric statistics can sometimes be used to evaluate how well the data fit a known distribution or to come up with a general model that makes only minimal assumptions about the model's mathematical form.
Scope of the model
Assessing the scope of a model, that is, determining what situations the model is applicable to, can be less straightforward. If the model was constructed based on a set of data, one must determine for which systems or situations the known data is a "typical" set of data. The question of whether the model describes well the properties of the system between data points is called interpolation, and the same question for events or data points outside the observed data is called extrapolation.
As an example of the typical limitations of the scope of a model, in evaluating Newtonian classical mechanics, we can note that Newton made his measurements without advanced equipment, so he could not measure properties of particles traveling at speeds close to the speed of light. Likewise, he did not measure the movements of molecules and other small particles, but macro particles only. It is then not surprising that his model does not extrapolate well into these domains, even though his model is quite sufficient for ordinary life physics.
Philosophical considerations
Many types of modeling implicitly involve claims about causality. This is usually (but not always) true of models involving differential equations. As the purpose of modeling is to increase our understanding of the world, the validity of a model rests not only on its fit to empirical observations, but also on its ability to extrapolate to situations or data beyond those originally described in the model. One can think of this as the differentiation between qualitative and quantitative predictions. One can also argue that a model is worthless unless it provides some insight which goes beyond what is already known from direct investigation of the phenomenon being studied.
An example of such criticism is the argument that the mathematical models of optimal foraging theory do not offer insight that goes beyond the common-sense conclusions of evolution and other basic principles of ecology. It should also be noted that while mathematical modeling uses mathematical concepts and language, it is not itself a branch of mathematics and does not necessarily conform to any mathematical logic, but is typically a branch of some science or other technical subject, with corresponding concepts and standards of argumentation.
Significance in the natural sciences
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models. Throughout history, more and more accurate mathematical models have been developed. Newton's laws accurately describe many everyday phenomena, but at certain limits theory of relativity and quantum mechanics must be used.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton's laws, Maxwell's equations and the Schrödinger equation. These laws are a basis for making mathematical models of real situations. Many real situations are very complex and thus are modeled approximately on a computer: a model that is computationally feasible is built from the basic laws or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
Some applications
Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, Boolean values or strings, for example. The variables represent some properties of the system, for example, the measured system outputs often in the form of signals, timing data, counters, and event occurrence. The actual model is the set of functions that describe the relations between the different variables.
Examples
Popular examples in computer science are the mathematical models of various machines. One example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to its deterministic nature, is implementable in hardware and software for solving various specific problems. For example, the following is a DFA M with a binary alphabet, which requires that the input contain an even number of 0s:
M = (Q, Σ, δ, q0, F), where the set of states is Q = {S1, S2}, the input alphabet is Σ = {0, 1}, the start state is q0 = S1, the set of accepting states is F = {S1}, and the transition function δ is defined by the following state-transition table:
{| border="1"
|  || 0 || 1
|-
| S1 || S2 || S1
|-
| S2 || S1 || S2
|}
The state S1 represents that there has been an even number of 0s in the input so far, while S2 signifies an odd number. A 1 in the input does not change the state of the automaton. When the input ends, the state will show whether the input contained an even number of 0s or not. If the input did contain an even number of 0s, M will finish in state S1, an accepting state, so the input string will be accepted.
The language recognized by M is the regular language given by the regular expression 1*( 0 (1*) 0 (1*) )*, where "*" is the Kleene star; e.g., 1* denotes any non-negative number (possibly zero) of symbols "1".
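Because a DFA is directly implementable in software, the automaton M can be sketched in a few lines of Python; the dictionary encoding of the transition table below is just one possible representation.

```python
# Transition table for M: S1 = even number of 0s seen so far, S2 = odd.
TRANSITIONS = {
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}
START, ACCEPTING = "S1", {"S1"}

def accepts(word: str) -> bool:
    """Run the DFA on a binary string and report whether it is accepted."""
    state = START
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("1001"))  # True: the input contains two 0s
print(accepts("10"))    # False: the input contains one 0
```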
Many everyday activities carried out without a thought are uses of mathematical models. A geographical map projection of a region of the earth onto a small, plane surface is a model which can be used for many purposes such as planning travel.
Another simple activity is predicting the position of a vehicle from its initial position, direction and speed of travel, using the equation that distance traveled is the product of time and speed. This is known as dead reckoning when used more formally. Mathematical modeling in this way does not necessarily require formal mathematics; animals have been shown to use dead reckoning.
Population Growth. A simple (though approximate) model of population growth is the Malthusian growth model. A slightly more realistic and widely used population growth model is the logistic function, and its extensions.
Model of a particle in a potential-field. In this model we consider a particle as being a point of mass $m$ which describes a trajectory in space which is modeled by a function giving its coordinates in space as a function of time. The potential field is given by a function $V : \mathbb{R}^3 \to \mathbb{R}$ and the trajectory, that is a function $\mathbf{x} : \mathbb{R} \to \mathbb{R}^3$, is the solution of the differential equation $m\,\ddot{\mathbf{x}}(t) = -\nabla V(\mathbf{x}(t))$, which can also be written as $m\,\frac{\mathrm{d}^2\mathbf{x}}{\mathrm{d}t^2} = -\nabla V(\mathbf{x})$. (A minimal numerical sketch of this model appears after these examples.)
Note this model assumes the particle is a point mass, which is certainly known to be false in many cases in which we use this model; for example, as a model of planetary motion.
Model of rational behavior for a consumer. In this model we assume a consumer faces a choice of $n$ commodities labeled $1, 2, \dots, n$, each with a market price $p_1, p_2, \dots, p_n$. The consumer is assumed to have an ordinal utility function $U$ (ordinal in the sense that only the sign of the differences between two utilities, and not the level of each utility, is meaningful), depending on the amounts of commodities $x_1, x_2, \dots, x_n$ consumed. The model further assumes that the consumer has a budget $M$ which is used to purchase a vector $x_1, x_2, \dots, x_n$ in such a way as to maximize $U(x_1, x_2, \dots, x_n)$. The problem of rational behavior in this model then becomes a mathematical optimization problem: maximize $U(x_1, x_2, \dots, x_n)$ subject to $\sum_{i=1}^{n} p_i x_i \leq M$ and $x_i \geq 0$ for all $i$. This model has been used in a wide variety of economic contexts, such as in general equilibrium theory to show existence and Pareto efficiency of economic equilibria.
The neighbour-sensing model explains mushroom formation from the initially chaotic fungal network.
In computer science, mathematical models may be used to simulate computer networks.
In mechanics, mathematical models may be used to analyze the movement of a rocket model.
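As promised above, here is a minimal numerical sketch of the particle-in-a-potential model, assuming a one-dimensional harmonic potential V(x) = ½kx²; the mass, force constant, and initial conditions are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0  # assumed mass and force constant for V(x) = 0.5 * k * x**2

def rhs(t, state):
    x, v = state
    force = -k * x            # -dV/dx for the harmonic potential
    return [v, force / m]     # dx/dt = v, dv/dt = F/m (Newton's second law)

# Integrate the trajectory from x(0) = 1, v(0) = 0 over ten time units.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], max_step=0.01)
print("final position:", sol.y[0, -1])
```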
See also
Agent-based model
All models are wrong
Cliodynamics
Computer simulation
Conceptual model
Decision engineering
Grey box model
International Mathematical Modeling Challenge
Mathematical biology
Mathematical diagram
Mathematical economics
Mathematical modelling of infectious disease
Mathematical finance
Mathematical psychology
Mathematical sociology
Microscale and macroscale models
Model inversion
Resilience (mathematics)
Scientific model
Sensitivity analysis
Statistical model
Surrogate model
System identification
References
Further reading
Books
Aris, Rutherford [1978] (1994). Mathematical Modelling Techniques, New York: Dover.
Bender, E.A. [1978] (2000). An Introduction to Mathematical Modeling, New York: Dover.
Gary Chartrand (1977) Graphs as Mathematical Models, Prindle, Webber & Schmidt
Dubois, G. (2018) "Modeling and Simulation", Taylor & Francis, CRC Press.
Gershenfeld, N. (1998) The Nature of Mathematical Modeling, Cambridge University Press.
Lin, C.C. & Segel, L.A. (1988). Mathematics Applied to Deterministic Problems in the Natural Sciences, Philadelphia: SIAM.
Specific applications
Papadimitriou, Fivos. (2010). Mathematical Modelling of Spatial-Ecological Complex Systems: an Evaluation. Geography, Environment, Sustainability 1(3), 67-80.
An Introduction to Infectious Disease Modelling by Emilia Vynnycky and Richard G White.
External links
General reference
Patrone, F. Introduction to modeling via differential equations, with critical remarks.
Plus teacher and student package: Mathematical Modelling. Brings together all articles on mathematical modeling from Plus Magazine, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge.
Philosophical
Frigg, R. and S. Hartmann, Models in Science, in: The Stanford Encyclopedia of Philosophy, (Spring 2006 Edition)
Griffiths, E. C. (2010) What is a model?
Applied mathematics
Conceptual modelling
Knowledge representation
Mathematical terminology
Mathematical and quantitative methods (economics) | 0.798592 | 0.99726 | 0.796404 |
Inorganic chemistry | Inorganic chemistry deals with synthesis and behavior of inorganic and organometallic compounds. This field covers chemical compounds that are not carbon-based, which are the subjects of organic chemistry. The distinction between the two disciplines is far from absolute, as there is much overlap in the subdiscipline of organometallic chemistry. It has applications in every aspect of the chemical industry, including catalysis, materials science, pigments, surfactants, coatings, medications, fuels, and agriculture.
Occurrence
Many inorganic compounds are found in nature as minerals. Soil may contain iron sulfide as pyrite or calcium sulfate as gypsum. Inorganic compounds are also found multitasking as biomolecules: as electrolytes (sodium chloride), in energy storage (ATP) or in construction (the polyphosphate backbone in DNA).
Bonding
Inorganic compounds exhibit a range of bonding properties. Some are ionic compounds, consisting of very simple cations and anions joined by ionic bonding. Examples of salts (which are ionic compounds) are magnesium chloride MgCl2, which consists of magnesium cations Mg2+ and chloride anions Cl−; or sodium hydroxide NaOH, which consists of sodium cations Na+ and hydroxide anions OH−. Some inorganic compounds are highly covalent, such as sulfur dioxide and iron pentacarbonyl. Many inorganic compounds feature polar covalent bonding, which is a form of bonding intermediate between covalent and ionic bonding. This description applies to many oxides, carbonates, and halides. Many inorganic compounds are characterized by high melting points. Some salts (e.g., NaCl) are very soluble in water.
When one reactant contains hydrogen atoms, a reaction can take place by exchanging protons in acid-base chemistry. In a more general definition, any chemical species capable of binding to electron pairs is called a Lewis acid; conversely any molecule that tends to donate an electron pair is referred to as a Lewis base. As a refinement of acid-base interactions, the HSAB theory takes into account polarizability and size of ions.
Subdivisions of inorganic chemistry
Subdivisions of inorganic chemistry are numerous, but include:
organometallic chemistry, compounds with metal-carbon bonds. This area touches on organic synthesis, which employs many organometallic catalysts and reagents.
cluster chemistry, compounds with several metals bound together with metal-metal bonds or bridging ligands.
bioinorganic chemistry, biomolecules that contain metals. This area touches on medicinal chemistry.
materials chemistry and solid state chemistry, extended (i.e. polymeric) solids exhibiting properties not seen for simple molecules. Many practical themes are associated with these areas, including ceramics.
Industrial inorganic chemistry
Inorganic chemistry is a highly practical area of science. Traditionally, the scale of a nation's economy could be evaluated by its productivity of sulfuric acid.
An important man-made inorganic compound is ammonium nitrate, used for fertilization. The ammonia is produced through the Haber process. Nitric acid is prepared from the ammonia by oxidation. Another large-scale inorganic material is portland cement. Inorganic compounds are used as catalysts such as vanadium(V) oxide for the oxidation of sulfur dioxide and titanium(III) chloride for the polymerization of alkenes. Many inorganic compounds are used as reagents in organic chemistry such as lithium aluminium hydride.
Descriptive inorganic chemistry
Descriptive inorganic chemistry focuses on the classification of compounds based on their properties. Partly the classification focuses on the position in the periodic table of the heaviest element (the element with the highest atomic weight) in the compound, and partly on grouping compounds by their structural similarities.
Coordination compounds
Classical coordination compounds feature metals bound to "lone pairs" of electrons residing on the main group atoms of ligands such as H2O, NH3, Cl−, and CN−. In modern coordination compounds almost all organic and inorganic compounds can be used as ligands. The "metal" usually is a metal from the groups 3–13, as well as the trans-lanthanides and trans-actinides, but from a certain perspective, all chemical compounds can be described as coordination complexes.
The stereochemistry of coordination complexes can be quite rich, as hinted at by Werner's separation of two enantiomers of [Co((OH)2Co(NH3)4)3]6+, an early demonstration that chirality is not inherent to organic compounds. A topical theme within this specialization is supramolecular coordination chemistry.
Examples: [Co(EDTA)]−, [Co(NH3)6]3+, TiCl4(THF)2.
Coordination compounds show a rich diversity of structures, varying from tetrahedral for titanium (e.g., TiCl4) to square planar for some nickel complexes to octahedral for coordination complexes of cobalt. A range of transition metals can be found in biologically important compounds, such as iron in hemoglobin.
Examples: iron pentacarbonyl, titanium tetrachloride, cisplatin
Main group compounds
These species feature elements from groups I, II, III, IV, V, VI, VII, 0 (excluding hydrogen) of the periodic table. Due to their often similar reactivity, the elements in group 3 (Sc, Y, and La) and group 12 (Zn, Cd, and Hg) are also generally included, and the lanthanides and actinides are sometimes included as well.
Main group compounds have been known since the beginnings of chemistry, e.g., elemental sulfur and the distillable white phosphorus. Experiments on oxygen, O2, by Lavoisier and Priestley not only identified an important diatomic gas, but opened the way for describing compounds and reactions according to stoichiometric ratios. The discovery of a practical synthesis of ammonia using iron catalysts by Carl Bosch and Fritz Haber in the early 1900s deeply impacted mankind, demonstrating the significance of inorganic chemical synthesis.
Typical main group compounds are SiO2, SnCl4, and N2O. Many main group compounds can also be classed as "organometallic", as they contain organic groups, e.g., B(CH3)3. Main group compounds also occur in nature, e.g., phosphate in DNA, and therefore may be classed as bioinorganic. Conversely, organic compounds lacking (many) hydrogen ligands can be classed as "inorganic", such as the fullerenes, buckytubes and binary carbon oxides.
Examples: tetrasulfur tetranitride S4N4, diborane B2H6, silicones, buckminsterfullerene C60.
Noble gas compounds include several derivatives of xenon and krypton.
Examples: xenon hexafluoride XeF6, xenon trioxide XeO3, and krypton difluoride KrF2
Organometallic compounds
Usually, organometallic compounds are considered to contain the M-C-H group. The metal (M) in these species can either be a main group element or a transition metal. Operationally, the definition of an organometallic compound is more relaxed to include also highly lipophilic complexes such as metal carbonyls and even metal alkoxides.
Organometallic compounds are mainly considered a special category because organic ligands are often sensitive to hydrolysis or oxidation, necessitating that organometallic chemistry employs more specialized preparative methods than was traditional in Werner-type complexes. Synthetic methodology, especially the ability to manipulate complexes in solvents of low coordinating power, enabled the exploration of very weakly coordinating ligands such as hydrocarbons, H2, and N2. Because the ligands are petrochemicals in some sense, the area of organometallic chemistry has greatly benefited from its relevance to industry.
Examples: Cyclopentadienyliron dicarbonyl dimer (C5H5)Fe(CO)2CH3, ferrocene Fe(C5H5)2, molybdenum hexacarbonyl Mo(CO)6, triethylborane Et3B, Tris(dibenzylideneacetone)dipalladium(0) Pd2(dba)3
Cluster compounds
Clusters can be found in all classes of chemical compounds. According to the commonly accepted definition, a cluster consists minimally of a triangular set of atoms that are directly bonded to each other. But metal-metal bonded dimetallic complexes are highly relevant to the area. Clusters occur in "pure" inorganic systems, organometallic chemistry, main group chemistry, and bioinorganic chemistry. The distinction between very large clusters and bulk solids is increasingly blurred. This interface is the chemical basis of nanoscience or nanotechnology and specifically arises from the study of quantum size effects in cadmium selenide clusters. Thus, large clusters can be described as an array of bound atoms intermediate in character between a molecule and a solid.
Examples: Fe3(CO)12, B10H14, [Mo6Cl14]2−, 4Fe-4S
Bioinorganic compounds
By definition, these compounds occur in nature, but the subfield includes anthropogenic species, such as pollutants (e.g., methylmercury) and drugs (e.g., Cisplatin). The field, which incorporates many aspects of biochemistry, includes many kinds of compounds, e.g., the phosphates in DNA, and also metal complexes containing ligands that range from biological macromolecules, commonly peptides, to ill-defined species such as humic acid, and to water (e.g., coordinated to gadolinium complexes employed for MRI). Traditionally bioinorganic chemistry focuses on electron- and energy-transfer in proteins relevant to respiration. Medicinal inorganic chemistry includes the study of both non-essential and essential elements with applications to diagnosis and therapies.
Examples: hemoglobin, methylmercury, carboxypeptidase
Solid state compounds
This important area focuses on structure, bonding, and the physical properties of materials. In practice, solid state inorganic chemistry uses techniques such as crystallography to gain an understanding of the properties that result from collective interactions between the subunits of the solid. Included in solid state chemistry are metals and their alloys or intermetallic derivatives. Related fields are condensed matter physics, mineralogy, and materials science.
Examples: silicon chips, zeolites, YBa2Cu3O7
Spectroscopy and magnetism
In contrast to most organic compounds, many inorganic compounds are magnetic and/or colored. These properties provide information on the bonding and structure. The magnetism of inorganic compounds can be complex. For example, most copper(II) compounds are paramagnetic but CuII2(OAc)4(H2O)2 is almost diamagnetic below room temperature. The explanation is the magnetic coupling between pairs of Cu(II) sites in the acetate.
Qualitative theories
Inorganic chemistry has greatly benefited from qualitative theories. Such theories are easier to learn as they require little background in quantum theory. Within main group compounds, VSEPR theory powerfully predicts, or at least rationalizes, the structures of main group compounds, such as an explanation for why NH3 is pyramidal whereas ClF3 is T-shaped. For the transition metals, crystal field theory allows one to understand the magnetism of many simple complexes, such as why [FeIII(CN)6]3− has only one unpaired electron, whereas [FeIII(H2O)6]3+ has five. A particularly powerful qualitative approach to assessing the structure and reactivity begins with classifying molecules according to electron counting, focusing on the numbers of valence electrons, usually at the central atom in a molecule.
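A minimal sketch of the electron-counting bookkeeping mentioned above, using the neutral ("covalent") counting convention; the ligand electron contributions in the dictionary are standard textbook values for that convention, and the helper function is a hypothetical illustration rather than an established tool.

```python
# Neutral-atom electron counting: metal group number plus ligand contributions.
LIGAND_ELECTRONS = {"CO": 2, "C5H5": 5, "PR3": 2, "H": 1, "Cl": 1}
METAL_GROUP = {"Fe": 8, "Mo": 6, "Ni": 10, "W": 6}

def electron_count(metal, ligands):
    """Sum the metal's group electrons and each ligand's donated electrons."""
    return METAL_GROUP[metal] + sum(LIGAND_ELECTRONS[lig] * n for lig, n in ligands)

print(electron_count("Mo", [("CO", 6)]))    # Mo(CO)6   -> 18 electrons
print(electron_count("Fe", [("C5H5", 2)]))  # ferrocene -> 18 electrons
```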
Molecular symmetry group theory
A central construct in chemistry is molecular symmetry, as embodied in group theory. Inorganic compounds display particularly diverse symmetries, so it is logical that group theory is intimately associated with inorganic chemistry. Group theory provides the language to describe the shapes of molecules according to their point group symmetry. Group theory also enables factoring and simplification of theoretical calculations.
Spectroscopic features are analyzed and described with respect to the symmetry properties of the, inter alia, vibrational or electronic states. Knowledge of the symmetry properties of the ground and excited states allows one to predict the numbers and intensities of absorptions in vibrational and electronic spectra. A classic application of group theory is the prediction of the number of C-O vibrations in substituted metal carbonyl complexes. The most common applications of symmetry to spectroscopy involve vibrational and electronic spectra.
Group theory highlights commonalities and differences in the bonding of otherwise disparate species. For example, the metal-based orbitals transform identically for WF6 and W(CO)6, but the energies and populations of these orbitals differ significantly. A similar relationship exists between CO2 and molecular beryllium difluoride.
Thermodynamics and inorganic chemistry
An alternative quantitative approach to inorganic chemistry focuses on energies of reactions. This approach is highly traditional and empirical, but it is also useful. Broad concepts that are couched in thermodynamic terms include redox potential, acidity, and phase changes. A classic concept in inorganic thermodynamics is the Born–Haber cycle, which is used for assessing the energies of elementary processes such as electron affinity, some of which cannot be observed directly.
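As an arithmetic illustration of a Born–Haber cycle, the following sketch estimates the lattice energy of NaCl from approximate textbook enthalpies; all values (in kJ/mol) are rounded and should be treated as illustrative assumptions.

```python
# Born-Haber cycle for NaCl; approximate textbook values in kJ/mol.
dH_formation      = -411  # Na(s) + 1/2 Cl2(g) -> NaCl(s)
dH_sublimation    = +107  # Na(s) -> Na(g)
ionization_energy = +496  # Na(g) -> Na+(g) + e-
half_dissociation = +122  # 1/2 Cl2(g) -> Cl(g)
electron_affinity = -349  # Cl(g) + e- -> Cl-(g)

# Hess's law: formation enthalpy = sum of the gas-phase steps + lattice energy.
lattice_energy = dH_formation - (dH_sublimation + ionization_energy
                                 + half_dissociation + electron_affinity)
print("estimated lattice energy:", lattice_energy, "kJ/mol")  # about -787
```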
Mechanistic inorganic chemistry
An important aspect of inorganic chemistry focuses on reaction pathways, i.e. reaction mechanisms.
Main group elements and lanthanides
The mechanisms of main group compounds of groups 13-18 are usually discussed in the context of organic chemistry (organic compounds are main group compounds, after all). Elements heavier than C, N, O, and F often form compounds with more electrons than predicted by the octet rule, as explained in the article on hypervalent molecules. The mechanisms of their reactions differ from organic compounds for this reason. Elements lighter than carbon (B, Be, Li) as well as Al and Mg often form electron-deficient structures that are electronically akin to carbocations. Such electron-deficient species tend to react via associative pathways. The chemistry of the lanthanides mirrors many aspects of chemistry seen for aluminium.
Transition metal complexes
Transition metal and main group compounds often react differently. The important role of d-orbitals in bonding strongly influences the pathways and rates of ligand substitution and dissociation. These themes are covered in articles on coordination chemistry and ligand. Both associative and dissociative pathways are observed.
An overarching aspect of mechanistic transition metal chemistry is the kinetic lability of the complex illustrated by the exchange of free and bound water in the prototypical complexes [M(H2O)6]n+:
[M(H2O)6]n+ + 6 H2O* → [M(H2O*)6]n+ + 6 H2O
where H2O* denotes isotopically enriched water, e.g., H217O
The rates of water exchange vary by 20 orders of magnitude across the periodic table, with lanthanide complexes at one extreme and Ir(III) species being the slowest.
Redox reactions
Redox reactions are prevalent for the transition elements. Two classes of redox reaction are considered: atom-transfer reactions, such as oxidative addition/reductive elimination, and electron-transfer. A fundamental redox reaction is "self-exchange", which involves the degenerate reaction between an oxidant and a reductant. For example, permanganate and its one-electron reduced relative manganate exchange one electron:
[MnO4]− + [Mn*O4]2− → [MnO4]2− + [Mn*O4]−
Reactions at ligands
Coordinated ligands display reactivity distinct from the free ligands. For example, the acidity of the ammonia ligands in [Co(NH3)6]3+ is elevated relative to NH3 itself. Alkenes bound to metal cations are reactive toward nucleophiles whereas alkenes normally are not. The large and industrially important area of catalysis hinges on the ability of metals to modify the reactivity of organic ligands. Homogeneous catalysis occurs in solution and heterogeneous catalysis occurs when gaseous or dissolved substrates interact with surfaces of solids. Traditionally homogeneous catalysis is considered part of organometallic chemistry and heterogeneous catalysis is discussed in the context of surface science, a subfield of solid state chemistry. But the basic inorganic chemical principles are the same. Transition metals, almost uniquely, react with small molecules such as CO, H2, O2, and C2H4. The industrial significance of these feedstocks drives the active area of catalysis. Ligands can also undergo ligand transfer reactions such as transmetalation.
Characterization of inorganic compounds
Because of the diverse range of elements and the correspondingly diverse properties of the resulting derivatives, inorganic chemistry is closely associated with many methods of analysis. Older methods tended to examine bulk properties such as the electrical conductivity of solutions, melting points, solubility, and acidity. With the advent of quantum theory and the corresponding expansion of electronic apparatus, new tools have been introduced to probe the electronic properties of inorganic molecules and solids. Often these measurements provide insights relevant to theoretical models. Commonly encountered techniques are:
X-ray crystallography: This technique allows for the 3D determination of molecular structures.
Various forms of spectroscopy:
Ultraviolet-visible spectroscopy: Historically, this has been an important tool, since many inorganic compounds are strongly colored
NMR spectroscopy: Besides 1H and 13C many other NMR-active nuclei (e.g., 11B, 19F, 31P, and 195Pt) can give important information on compound properties and structure. The NMR of paramagnetic species can provide important structural information. Proton (1H) NMR is also important because the light hydrogen nucleus is not easily detected by X-ray crystallography.
Infrared spectroscopy: Mostly for absorptions from carbonyl ligands
Electron nuclear double resonance (ENDOR) spectroscopy
Mössbauer spectroscopy
Electron-spin resonance: ESR (or EPR) allows for the measurement of the environment of paramagnetic metal centres.
Electrochemistry: Cyclic voltammetry and related techniques probe the redox characteristics of compounds.
Synthetic inorganic chemistry
Although some inorganic species can be obtained in pure form from nature, most are synthesized in chemical plants and in the laboratory.
Inorganic synthetic methods can be classified roughly according to the volatility or solubility of the component reactants. Soluble inorganic compounds are prepared using methods of organic synthesis. For metal-containing compounds that are reactive toward air, Schlenk line and glove box techniques are followed. Volatile compounds and gases are manipulated in "vacuum manifolds" consisting of glass piping interconnected through valves, the entirety of which can be evacuated to 0.001 mm Hg or less. Compounds are condensed using liquid nitrogen (b.p. 77 K) or other cryogens. Solids are typically prepared using tube furnaces, the reactants and products being sealed in containers, often made of fused silica (amorphous SiO2) but sometimes more specialized materials such as welded Ta tubes or Pt "boats". Products and reactants are transported between temperature zones to drive reactions.
See also
Important publications in inorganic chemistry
References | 0.80004 | 0.994713 | 0.795811 |
Tautomer | Tautomers are structural isomers (constitutional isomers) of chemical compounds that readily interconvert. The chemical reaction interconverting the two is called tautomerization. This conversion commonly results from the relocation of a hydrogen atom within the compound. The phenomenon of tautomerization is called tautomerism, also called desmotropism. Tautomerism is for example relevant to the behavior of amino acids and nucleic acids, two of the fundamental building blocks of life.
Care should be taken not to confuse tautomers with depictions of "contributing structures" in chemical resonance. Tautomers are distinct chemical species that can be distinguished by their differing atomic connectivities, molecular geometries, and physicochemical and spectroscopic properties, whereas resonance forms are merely alternative Lewis structure (valence bond theory) depictions of a single chemical species, whose true structure is a quantum superposition, essentially the "average" of the idealized, hypothetical geometries implied by these resonance forms.
Examples
Tautomerization is pervasive in organic chemistry. It is typically associated with polar molecules and ions containing functional groups that are at least weakly acidic. Most common tautomers exist in pairs, which means that the hydrogen is located at one of two positions, and even more specifically the most common form involves a hydrogen changing places with a double bond. Common tautomeric pairs include:
ketone – enol (see keto–enol tautomerism)
enamine – imine
cyanamide – carbodiimide
guanidine – guanidine – guanidine: With a central carbon surrounded by three nitrogens, a guanidine group allows this transform in three possible orientations
amide – imidic acid (e.g., the latter is encountered during nitrile hydrolysis reactions)
lactam – lactim, a cyclic form of amide-imidic acid tautomerism in 2-pyridone and derived structures such as the nucleobases guanine, thymine, and cytosine
imine – imine, e.g., during pyridoxal phosphate catalyzed enzymatic reactions
nitro – aci-nitro (nitronic acid)
nitroso – oxime
ketene – ynol, which involves a triple bond
amino acid – ammonium carboxylate, which applies to the building blocks of the proteins. This shifts the proton more than two atoms away, producing a zwitterion rather than shifting a double bond
phosphite – phosphonate, between trivalent and pentavalent phosphorus.
Prototropy
Prototropy is the most common form of tautomerism and refers to the relocation of a hydrogen atom. Prototropic tautomerism may be considered a subset of acid-base behavior. Prototropic tautomers are sets of isomeric protonation states with the same empirical formula and total charge. Tautomerizations are catalyzed by:
bases, involving a series of steps: deprotonation, formation of a delocalized anion (e.g., an enolate), and protonation at a different position of the anion; and
acids, involving a series of steps: protonation, formation of a delocalized cation, and deprotonation at a different position adjacent to the cation).
Two specific further subcategories of tautomerizations:
Annular tautomerism is a type of prototropic tautomerism wherein a proton can occupy two or more positions of the heterocyclic systems found in many drugs, for example, 1H- and 3H-imidazole; 1H-, 2H- and 4H- 1,2,4-triazole; 1H- and 2H- isoindole.
Ring–chain tautomers occur when the movement of the proton is accompanied by a change from an open structure to a ring, such as the open chain and cyclic hemiacetal (typically pyranose or furanose forms) of many sugars. The tautomeric shift can be described as H−O ⋅ C=O ⇌ O−C−O−H, where the "⋅" indicates the initial absence of a bond.
Valence tautomerism
Valence tautomerism is a type of tautomerism in which single and/or double bonds are rapidly formed and ruptured, without migration of atoms or groups. It is distinct from prototropic tautomerism, and involves processes with rapid reorganisation of bonding electrons.
A pair of valence tautomers with formula C6H6O are benzene oxide and oxepin.
Other examples of this type of tautomerism can be found in bullvalene, and in open and closed forms of certain heterocycles, such as organic azides and tetrazoles, or mesoionic münchnone and acylamino ketene.
Valence tautomerism requires a change in molecular geometry and should not be confused with canonical resonance structures or mesomers.
Inorganic materials
In inorganic extended solids, valence tautomerism can manifest itself as a change in oxidation states and their spatial distribution upon a change of macroscopic thermodynamic conditions. Such effects have been called charge ordering or valence mixing to describe the behavior in inorganic oxides.
Consequences for chemical databases
The existence of multiple possible tautomers for individual chemical substances can lead to confusion. For example, samples of 2-pyridone and 2-hydroxypyridine do not exist as separate isolatable materials: the two tautomeric forms are interconvertible and the proportion of each depends on factors such as temperature, solvent, and additional substituents attached to the main ring.
Historically, each form of the substance was entered into databases such as those maintained by the Chemical Abstracts Service and given separate CAS Registry Numbers. 2-Pyridone was assigned [142-08-5] and 2-hydroxypyridine [109-10-4]. The latter is now a "replaced" registry number so that look-up by either identifier reaches the same entry. The facility to automatically recognise such potential tautomerism and ensure that all tautomers are indexed together has been greatly facilitated by the creation of the International Chemical Identifier (InChI) and associated software. Thus the standard InChI for either tautomer is InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7).
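The tautomer-aware indexing can be illustrated with a short sketch using the open-source RDKit toolkit (assuming an installation built with InChI support); the SMILES strings below are assumed representations of the two tautomers.

```python
from rdkit import Chem  # requires RDKit with InChI support

pyridone = Chem.MolFromSmiles("O=c1cccc[nH]1")     # 2-pyridone
hydroxypyridine = Chem.MolFromSmiles("Oc1ccccn1")  # 2-hydroxypyridine

# Standard InChI treats the tautomeric proton as mobile, so both calls are
# expected to return the same identifier quoted above.
print(Chem.MolToInchi(pyridone))
print(Chem.MolToInchi(hydroxypyridine))
```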
See also
Fluxional molecule
References
External links
Isomerism | 0.80048 | 0.993474 | 0.795256 |
Descriptive research | Descriptive research is used to describe characteristics of a population or phenomenon being studied. It does not answer questions about how/when/why the characteristics occurred. Rather it addresses the "what" question (what are the characteristics of the population or situation being studied?). The characteristics used to describe the situation or population are usually some kind of categorical scheme also known as descriptive categories. For example, the periodic table categorizes the elements. Scientists use knowledge about the nature of electrons, protons and neutrons to devise this categorical scheme. We now take for granted the periodic table, yet it took descriptive research to devise it. Descriptive research generally precedes explanatory research. For example, over time the periodic table's description of the elements allowed scientists to explain chemical reaction and make sound prediction when elements were combined.
Hence, descriptive research cannot describe what caused a situation. Thus, descriptive research cannot be used as the basis of a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.
The description is used for frequencies, averages, and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description and researchers may follow up with examinations of why the observations exist and what the implications of the findings are.
Social science research
In addition, the conceptualizing of descriptive research (categorization or taxonomy) precedes the hypotheses of explanatory research. (For a discussion of how the underlying conceptualization of exploratory research, descriptive research and explanatory research fit together, see: Conceptual framework.)
Descriptive research can be statistical research. The main objective of this type of research is to describe the data and characteristics of what is being studied. The idea behind this type of research is to study frequencies, averages, and other statistical calculations. Although this research is highly accurate, it does not gather the causes behind a situation. Descriptive research is mainly done when a researcher wants to gain a better understanding of a topic; that is, it analyzes the past as opposed to the future. In this sense, descriptive research is the exploration of existing phenomena whose detailed facts are not yet known.
Descriptive science
Descriptive science is a category of science that involves descriptive research; that is, observing, recording, describing, and classifying phenomena. Descriptive research is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation.
David A. Grimaldi and Michael S. Engel suggest that descriptive science in biology is currently undervalued and misunderstood:
"Descriptive" in science is a pejorative, almost always preceded by "merely," and typically applied to the array of classical -ologies and -omies: anatomy, archaeology, astronomy, embryology, morphology, paleontology, taxonomy, botany, cartography, stratigraphy, and the various disciplines of zoology, to name a few. [...] First, an organism, object, or substance is not described in a vacuum, but rather in comparison with other organisms, objects, and substances. [...] Second, descriptive science is not necessarily low-tech science, and high tech is not necessarily better. [...] Finally, a theory is only as good as what it explains and the evidence (i.e., descriptions) that supports it.
A negative attitude by scientists toward descriptive science is not limited to biological disciplines: Lord Rutherford's notorious quote, "All science is either physics or stamp collecting," displays a clear negative attitude about descriptive science, and it is known that he was dismissive of astronomy, which at the beginning of the 20th century was still gathering largely descriptive data about stars, nebulae, and galaxies, and was only beginning to develop a satisfactory integration of these observations within the framework of physical law, a cornerstone of the philosophy of physics.
Descriptive versus design sciences
Ilkka Niiniluoto has used the terms "descriptive sciences" and "design sciences" as an updated version of the distinction between basic and applied science. According to Niiniluoto, descriptive sciences are those that seek to describe reality, while design sciences seek useful knowledge for human activities.
See also
Methodology
Normative science
Procedural knowledge
Scientific method
References
External links
Descriptive Research from BYU linguistics department
Research
Descriptive statistics
Philosophy of science | 0.802016 | 0.991542 | 0.795232 |
Structural equation modeling | Structural equation modeling (SEM) is a diverse set of methods used by scientists doing both observational and experimental research. SEM is used mostly in the social and behavioral sciences but it is also used in epidemiology, business, and other fields. A definition of SEM is difficult without reference to technical language, but a good starting place is the name itself.
SEM involves a model representing how various aspects of some phenomenon are thought to causally connect to one another. Structural equation models often contain postulated causal connections among some latent variables (variables thought to exist but which can't be directly observed). Additional causal connections link those latent variables to observed variables whose values appear in a data set. The causal connections are represented using equations but the postulated structuring can also be presented using diagrams containing arrows as in Figures 1 and 2. The causal structures imply that specific patterns should appear among the values of the observed variables. This makes it possible to use the connections between the observed variables' values to estimate the magnitudes of the postulated effects, and to test whether or not the observed data are consistent with the requirements of the hypothesized causal structures.
The boundary between what is and is not a structural equation model is not always clear but SE models often contain postulated causal connections among a set of latent variables (variables thought to exist but which can't be directly observed, like an attitude, intelligence or mental illness) and causal connections linking the postulated latent variables to variables that can be observed and whose values are available in some data set. Variations among the styles of latent causal connections, variations among the observed variables measuring the latent variables, and variations in the statistical estimation strategies result in the SEM toolkit including confirmatory factor analysis, confirmatory composite analysis, path analysis, multi-group modeling, longitudinal modeling, partial least squares path modeling, latent growth modeling and hierarchical or multilevel modeling.
SEM researchers use computer programs to estimate the strength and sign of the coefficients corresponding to the modeled structural connections, for example the numbers connected to the arrows in Figure 1. Because a postulated model such as Figure 1 may not correspond to the worldly forces controlling the observed data measurements, the programs also provide model tests and diagnostic clues suggesting which indicators, or which model components, might introduce inconsistency between the model and observed data. Criticisms of SEM methods hint at: disregard of available model tests, problems in the model's specification, a tendency to accept models without considering external validity, and potential philosophical biases.
A great advantage of SEM is that all of these measurements and tests occur simultaneously in one statistical estimation procedure, where all the model coefficients are calculated using all information from the observed variables. This means the estimates are more accurate than if a researcher were to calculate each part of the model separately.
History
Structural equation modeling (SEM) began differentiating itself from correlation and regression when Sewall Wright provided explicit causal interpretations for a set of regression-style equations based on a solid understanding of the physical and physiological mechanisms producing direct and indirect effects among his observed variables. The equations were estimated like ordinary regression equations but the substantive context for the measured variables permitted clear causal, not merely predictive, understandings. O. D. Duncan introduced SEM to the social sciences in his 1975 book, and SEM blossomed in the late 1970s and 1980s when increasing computing power permitted practical model estimation. In 1987 Hayduk provided the first book-length introduction to structural equation modeling with latent variables, and this was soon followed by Bollen's popular text (1989).
Different yet mathematically related modeling approaches developed in psychology, sociology, and economics. Early Cowles Commission work on simultaneous equations estimation centered on Koopmans and Hood's (1953) algorithms from transport economics and optimal routing, with maximum likelihood estimation, and closed form algebraic calculations, as iterative solution search techniques were limited in the days before computers. The convergence of two of these developmental streams (factor analysis from psychology, and path analysis from sociology via Duncan) produced the current core of SEM. One of several programs Karl Jöreskog developed at Educational Testing Service, LISREL embedded latent variables (which psychologists knew as the latent factors from factor analysis) within path-analysis-style equations (which sociologists inherited from Wright and Duncan). The factor-structured portion of the model incorporated measurement errors which permitted measurement-error-adjustment, though not necessarily error-free estimation, of effects connecting different postulated latent variables.
Traces of the historical convergence of the factor analytic and path analytic traditions persist as the distinction between the measurement and structural portions of models; and as continuing disagreements over model testing, and whether measurement should precede or accompany structural estimates. Viewing factor analysis as a data-reduction technique deemphasizes testing, which contrasts with path analytic appreciation for testing postulated causal connections – where the test result might signal model misspecification. The friction between factor analytic and path analytic traditions continues to surface in the literature.
Wright's path analysis influenced Hermann Wold, Wold's student Karl Jöreskog, and Jöreskog's student Claes Fornell, but SEM never gained a large following among U.S. econometricians, possibly due to fundamental differences in modeling objectives and typical data structures. The prolonged separation of SEM's economic branch led to procedural and terminological differences, though deep mathematical and statistical connections remain. The economic version of SEM can be seen in SEMNET discussions of endogeneity, and in the heat produced as Judea Pearl's approach to causality via directed acyclic graphs (DAG's) rubs against economic approaches to modeling. Discussions comparing and contrasting various SEM approaches are available but disciplinary differences in data structures and the concerns motivating economic models make reunion unlikely. Pearl extended SEM from linear to nonparametric models, and proposed causal and counterfactual interpretations of the equations. Nonparametric SEMs permit estimating total, direct and indirect effects without making any commitment to linearity of effects or assumptions about the distributions of the error terms.
SEM analyses are popular in the social sciences because computer programs make it possible to estimate complicated causal structures, but the complexity of the models introduces substantial variability in the quality of the results. Some, but not all, results are obtained without the "inconvenience" of understanding experimental design, statistical control, the consequences of sample size, and other features contributing to good research design.
General steps and considerations
The following considerations apply to the construction and assessment of many structural equation models.
Model specification
Building or specifying a model requires attending to:
the set of variables to be employed,
what is known about the variables,
what is presumed or hypothesized about the variables' causal connections and disconnections,
what the researcher seeks to learn from the modeling,
and the cases for which values of the variables will be available (kids? workers? companies? countries? cells? accidents? cults?).
Structural equation models attempt to mirror the worldly forces operative for causally homogeneous cases – namely cases enmeshed in the same worldly causal structures but whose values on the causes differ and who therefore possess different values on the outcome variables. Causal homogeneity can be facilitated by case selection, or by segregating cases in a multi-group model. A model's specification is not complete until the researcher specifies:
which effects and/or correlations/covariances are to be included and estimated,
which effects and other coefficients are forbidden or presumed unnecessary,
and which coefficients will be given fixed/unchanging values (e.g. to provide measurement scales for latent variables as in Figure 2).
The latent level of a model is composed of endogenous and exogenous variables. The endogenous latent variables are the true-score variables postulated as receiving effects from at least one other modeled variable. Each endogenous variable is modeled as the dependent variable in a regression-style equation. The exogenous latent variables are background variables postulated as causing one or more of the endogenous variables and are modeled like the predictor variables in regression-style equations. Causal connections among the exogenous variables are not explicitly modeled but are usually acknowledged by modeling the exogenous variables as freely correlating with one another. The model may include intervening variables – variables receiving effects from some variables but also sending effects to other variables. As in regression, each endogenous variable is assigned a residual or error variable encapsulating the effects of unavailable and usually unknown causes. Each latent variable, whether exogenous or endogenous, is thought of as containing the cases' true-scores on that variable, and these true-scores causally contribute valid/genuine variations into one or more of the observed/reported indicator variables.
The LISREL program assigned Greek names to the elements in a set of matrices to keep track of the various model components. These names became relatively standard notation, though the notation has been extended and altered to accommodate a variety of statistical considerations. Texts and programs "simplifying" model specification via diagrams or by using equations permitting user-selected variable names re-convert the user's model into some standard matrix-algebra form in the background. The "simplifications" are achieved by implicitly introducing default program "assumptions" about model features with which users supposedly need not concern themselves. Unfortunately, these default assumptions easily obscure model components that leave unrecognized issues lurking within the model's structure, and underlying matrices.
Two main components of models are distinguished in SEM: the structural model showing potential causal dependencies between endogenous and exogenous latent variables, and the measurement model showing the causal connections between the latent variables and the indicators. Exploratory and confirmatory factor analysis models, for example, focus on the causal measurement connections, while path models more closely correspond to SEMs latent structural connections.
Modelers specify each coefficient in a model as being free to be estimated, or fixed at some value. The free coefficients may be postulated effects the researcher wishes to test, background correlations among the exogenous variables, or the variances of the residual or error variables providing additional variations in the endogenous latent variables. The fixed coefficients may be values like the 1.0 values in Figure 2 that provide scales for the latent variables, or values of 0.0 which assert causal disconnections such as the assertion of no-direct-effects (no arrows) pointing from Academic Achievement to any of the four scales in Figure 1. SEM programs provide estimates and tests of the free coefficients, while the fixed coefficients contribute importantly to testing the overall model structure. Various kinds of constraints between coefficients can also be used. The model specification depends on what is known from the literature, the researcher's experience with the modeled indicator variables, and the features being investigated by using the specific model structure.
There is a limit to how many coefficients can be estimated in a model. If there are fewer data points than the number of estimated coefficients, the resulting model is said to be "unidentified" and no coefficient estimates can be obtained. Reciprocal effects and other causal loops may also interfere with estimation.
Estimation of free model coefficients
Model coefficients fixed at zero, 1.0, or other values, do not require estimation because they already have specified values. Estimated values for free model coefficients are obtained by maximizing fit to, or minimizing difference from, the data relative to what the data's features would be if the free model coefficients took on the estimated values. The model's implications for what the data should look like for a specific set of coefficient values depends on:
a) the coefficients' locations in the model (e.g. which variables are connected/disconnected),
b) the nature of the connections between the variables (covariances or effects; with effects often assumed to be linear),
c) the nature of the error or residual variables (often assumed to be independent of, or causally-disconnected from, many variables),
and d) the measurement scales appropriate for the variables (interval level measurement is often assumed).
A stronger effect connecting two latent variables implies the indicators of those latents should be more strongly correlated. Hence, a reasonable estimate of a latent's effect will be whatever value best matches the correlations between the indicators of the corresponding latent variables – namely the estimate-value maximizing the match with the data, or minimizing the differences from the data. With maximum likelihood estimation, the numerical values of all the free model coefficients are individually adjusted (progressively increased or decreased from initial start values) until they maximize the likelihood of observing the sample data – whether the data are the variables' covariances/correlations, or the cases' actual values on the indicator variables. Ordinary least squares estimates are the coefficient values that minimize the squared differences between the data and what the data would look like if the model was correctly specified, namely if all the model's estimated features correspond to real worldly features.
The appropriate statistical feature to maximize or minimize to obtain estimates depends on the variables' levels of measurement (estimation is generally easier with interval level measurements than with nominal or ordinal measures), and where a specific variable appears in the model (e.g. endogenous dichotomous variables create more estimation difficulties than exogenous dichotomous variables). Most SEM programs provide several options for what is to be maximized or minimized to obtain estimates of the model's coefficients. The choices often include maximum likelihood estimation (MLE), full information maximum likelihood (FIML), ordinary least squares (OLS), weighted least squares (WLS), diagonally weighted least squares (DWLS), and two stage least squares.
One common problem is that a coefficient's estimated value may be underidentified because it is insufficiently constrained by the model and data. No unique best-estimate exists unless the model and data together sufficiently constrain or restrict a coefficient's value. For example, the magnitude of a single data correlation between two variables is insufficient to provide estimates of a reciprocal pair of modeled effects between those variables. The correlation might be accounted for by one of the reciprocal effects being stronger than the other effect, or the other effect being stronger than the one, or by effects of equal magnitude. Underidentified effect estimates can be rendered identified by introducing additional model and/or data constraints. For example, reciprocal effects can be rendered identified by constraining one effect estimate to be double, triple, or equivalent to, the other effect estimate, but the resultant estimates will only be trustworthy if the additional model constraint corresponds to the world's structure. Data on a third variable that directly causes only one of a pair of reciprocally causally connected variables can also assist identification. Constraining a third variable to not directly cause one of the reciprocally-causal variables breaks the symmetry otherwise plaguing the reciprocal effect estimates because that third variable must be more strongly correlated with the variable it causes directly than with the variable at the "other" end of the reciprocal which it impacts only indirectly. Notice that this again presumes the properness of the model's causal specification – namely that there really is a direct effect leading from the third variable to the variable at this end of the reciprocal effects and no direct effect on the variable at the "other end" of the reciprocally connected pair of variables. Theoretical demands for null/zero effects provide helpful constraints assisting estimation, though theories often fail to clearly report which effects are allegedly nonexistent.
Model assessment
Model assessment depends on the theory, the data, the model, and the estimation strategy. Hence model assessments consider:
whether the data contain reasonable measurements of appropriate variables,
whether the modeled cases are causally homogeneous, (It makes no sense to estimate one model if the data cases reflect two or more different causal networks.)
whether the model appropriately represents the theory or features of interest, (Models are unpersuasive if they omit features required by a theory, or contain coefficients inconsistent with that theory.)
whether the estimates are statistically justifiable, (Substantive assessments may be devastated: by violating assumptions, by using an inappropriate estimator, and/or by encountering non-convergence of iterative estimators.)
the substantive reasonableness of the estimates, (Negative variances, and correlations exceeding 1.0 or -1.0, are impossible. Statistically possible estimates that are inconsistent with theory may also challenge theory, and our understanding.)
the remaining consistency, or inconsistency, between the model and data. (The estimation process minimizes the differences between the model and data but important and informative differences may remain.)
Research claiming to test or "investigate" a theory requires attending to beyond-chance model-data inconsistency. Estimation adjusts the model's free coefficients to provide the best possible fit to the data. The output from SEM programs includes a matrix reporting the relationships among the observed variables that would be observed if the estimated model effects actually controlled the observed variables' values. The "fit" of a model reports match or mismatch between the model-implied relationships (often covariances) and the corresponding observed relationships among the variables. Large and significant differences between the data and the model's implications signal problems. The probability accompanying a (chi-squared) test is the probability that the data could arise by random sampling variations if the estimated model constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the modeled structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations.
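As a concrete sketch of the model test described above, the following Python fragment computes the model chi-squared statistic and its probability from a minimized ML discrepancy value. All numeric inputs are hypothetical, and the N − 1 scaling shown is one common convention (some programs scale by N instead).

```python
# Minimal sketch of the chi-squared model test; the input values are hypothetical.
from scipy.stats import chi2

def model_chi_square(f_ml_min, n_cases, n_observed_vars, n_free_params):
    """Return the test statistic, its degrees of freedom, and the p-value."""
    t_stat = (n_cases - 1) * f_ml_min                          # common N-1 scaling
    n_moments = n_observed_vars * (n_observed_vars + 1) // 2   # distinct variances/covariances
    df = n_moments - n_free_params
    p_value = chi2.sf(t_stat, df)                              # upper-tail probability
    return t_stat, df, p_value

# Hypothetical values: minimized discrepancy 0.085, N = 300, 6 indicators, 13 free coefficients.
t_stat, df, p = model_chi_square(0.085, 300, 6, 13)
print(f"chi-square = {t_stat:.2f}, df = {df}, p = {p:.4f}")
```

A small p-value in this sketch would correspond to the beyond-chance model-data inconsistency discussed above.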
If a model remains inconsistent with the data despite selecting optimal coefficient estimates, an honest research response reports and attends to this evidence (often a significant model test). Beyond-chance model-data inconsistency challenges both the coefficient estimates and the model's capacity for adjudicating the model's structure, irrespective of whether the inconsistency originates in problematic data, inappropriate statistical estimation, or incorrect model specification.
Coefficient estimates in data-inconsistent ("failing") models are interpretable, as reports of how the world would appear to someone believing a model that conflicts with the available data. The estimates in data-inconsistent models do not necessarily become "obviously wrong" by becoming statistically strange, or wrongly signed according to theory. The estimates may even closely match a theory's requirements but the remaining data inconsistency renders the match between the estimates and theory unable to provide succor. Failing models remain interpretable, but only as interpretations that conflict with available evidence.
Replication is unlikely to detect misspecified models which inappropriately-fit the data. If the replicate data is within random variations of the original data, the same incorrect coefficient placements that provided inappropriate-fit to the original data will likely also inappropriately-fit the replicate data. Replication helps detect issues such as data mistakes (made by different research groups), but is especially weak at detecting misspecifications after exploratory model modification – as when confirmatory factor analysis (CFA) is applied to a random second-half of data following exploratory factor analysis (EFA) of first-half data.
A modification index is an estimate of how much a model's fit to the data would "improve" (but not necessarily how much the model's structure would improve) if a specific currently-fixed model coefficient were freed for estimation. Researchers confronting data-inconsistent models can easily free coefficients the modification indices report as likely to produce substantial improvements in fit. This simultaneously introduces a substantial risk of moving from a causally-wrong-and-failing model to a causally-wrong-but-fitting model because improved data-fit does not provide assurance that the freed coefficients are substantively reasonable or world matching. The original model may contain causal misspecifications such as incorrectly directed effects, or incorrect assumptions about unavailable variables, and such problems cannot be corrected by adding coefficients to the current model. Consequently, such models remain misspecified despite the closer fit provided by additional coefficients. Fitting yet worldly-inconsistent models are especially likely to arise if a researcher committed to a particular model (for example a factor model having a desired number of factors) gets an initially-failing model to fit by inserting measurement error covariances "suggested" by modification indices. MacCallum (1986) demonstrated that "even under favorable conditions, models arising from specification searches must be viewed with caution." Model misspecification may sometimes be corrected by insertion of coefficients suggested by the modification indices, but many more corrective possibilities are raised by employing a few indicators of similar-yet-importantly-different latent variables.
"Accepting" failing models as "close enough" is also not a reasonable alternative. A cautionary instance was provided by Browne, MacCallum, Kim, Andersen, and Glaser who addressed the mathematics behind why the χ² test can have (though it does not always have) considerable power to detect model misspecification. The probability accompanying a χ² test is the probability that the data could arise by random sampling variations if the current model, with its optimal estimates, constituted the real underlying population forces. A small probability reports it would be unlikely for the current data to have arisen if the current model structure constituted the real population causal forces – with the remaining differences attributed to random sampling variations. Browne, MacCallum, Kim, Andersen, and Glaser presented a factor model they viewed as acceptable despite the model being significantly inconsistent with their data according to χ². The fallaciousness of their claim that close-fit should be treated as good enough was demonstrated by Hayduk, Pazderka-Robinson, Cummings, Levers and Beres who demonstrated a fitting model for Browne, et al.'s own data by incorporating an experimental feature Browne, et al. overlooked. The fault was not in the math of the indices or in the over-sensitivity of χ² testing. The fault was in Browne, MacCallum, and the other authors forgetting, neglecting, or overlooking, that the amount of ill fit cannot be trusted to correspond to the nature, location, or seriousness of problems in a model's specification.
Many researchers tried to justify switching to fit-indices, rather than testing their models, by claiming that χ² increases (and hence its probability decreases) with increasing sample size (N). There are two mistakes in discounting χ² on this basis. First, for proper models, χ² does not increase with increasing N, so if χ² increases with N that itself is a sign that something is detectably problematic. And second, for models that are detectably misspecified, the increase of χ² with N provides the good news of increasing statistical power to detect model misspecification (namely, a reduced risk of Type II error). Some kinds of important misspecifications cannot be detected by χ², so any amount of ill fit beyond what might be reasonably produced by random variations warrants report and consideration. The χ² model test, possibly adjusted, is the strongest available structural equation model test.
Numerous fit indices quantify how closely a model fits the data but all fit indices suffer from the logical difficulty that the size or amount of ill fit is not trustably coordinated with the severity or nature of the issues producing the data inconsistency. Models with different causal structures which fit the data identically well, have been called equivalent models. Such models are data-fit-equivalent though not causally equivalent, so at least one of the so-called equivalent models must be inconsistent with the world's structure. If there is a perfect 1.0 correlation between X and Y and we model this as X causes Y, there will be perfect fit and zero residual error. But the model may not match the world because Y may actually cause X, or both X and Y may be responding to a common cause Z, or the world may contain a mixture of these effects (e.g. like a common cause plus an effect of Y on X), or other causal structures. The perfect fit does not tell us the model's structure corresponds to the world's structure, and this in turn implies that getting closer to perfect fit does not necessarily correspond to getting closer to the world's structure – maybe it does, maybe it doesn't. This makes it incorrect for a researcher to claim that even perfect model fit implies the model is correctly causally specified. For even moderately complex models, precisely equivalently-fitting models are rare. Models almost-fitting the data, according to any index, unavoidably introduce additional potentially-important yet unknown model misspecifications. These models constitute a greater research impediment.
This logical weakness renders all fit indices "unhelpful" whenever a structural equation model is significantly inconsistent with the data, but several forces continue to propagate fit-index use. For example, Dag Sorbom reported that when someone asked Karl Joreskog, the developer of the first structural equation modeling program, "Why have you then added GFI to your LISREL program?", Joreskog replied "Well, users threaten us saying they would stop using LISREL if it always produces such large chi-squares. So we had to invent something to make people happy. GFI serves that purpose." The evidence of model-data inconsistency was too statistically solid to be dislodged or discarded, but people could at least be provided a way to distract from the "disturbing" evidence. Career-profits can still be accrued by developing additional indices, reporting investigations of index behavior, and publishing models intentionally burying evidence of model-data inconsistency under an MDI (a mound of distracting indices). There seems no general justification for why a researcher should "accept" a causally wrong model, rather than attempting to correct detected misspecifications. And some portions of the literature seem not to have noticed that "accepting a model" (on the basis of a "satisfying" index value) suffers from an intensified version of the criticism applied to "acceptance" of a null-hypothesis. Introductory statistics texts usually recommend replacing the term "accept" with "failed to reject the null hypothesis" to acknowledge the possibility of Type II error. A Type III error arises from "accepting" a model hypothesis when the current data are sufficient to reject the model.
Whether or not researchers are committed to seeking the world's structure is a fundamental concern. Displacing test evidence of model-data inconsistency by hiding it behind index claims of acceptable-fit introduces the discipline-wide cost of diverting attention away from whatever the discipline might have done to attain a structurally-improved understanding of the discipline's substance. The discipline ends up paying a real cost for index-based displacement of evidence of model misspecification. The frictions created by disagreements over the necessity of correcting model misspecifications will likely increase with increasing use of non-factor-structured models, and with use of fewer, more-precise, indicators of similar yet importantly-different latent variables.
The considerations relevant to using fit indices include checking:
whether data concerns have been addressed (to ensure data mistakes are not driving model-data inconsistency);
whether criterion values for the index have been investigated for models structured like the researcher's model (e.g. index criteria based on factor-structured models are only appropriate if the researcher's model actually is factor structured);
whether the kinds of potential misspecifications in the current model correspond to the kinds of misspecifications on which the index criteria are based (e.g. criteria based on simulation of omitted factor loadings may not be appropriate for misspecification resulting from failure to include appropriate control variables);
whether the researcher knowingly agrees to disregard evidence pointing to the kinds of misspecifications on which the index criteria were based. (If the index criterion is based on simulating a missing factor loading or two, using that criterion acknowledges the researcher's willingness to accept a model missing a factor loading or two.);
whether the latest, not outdated, index criteria are being used (because the criteria for some indices tightened over time);
whether satisfying criterion values on pairs of indices are required (e.g. Hu and Bentler report that some common indices function inappropriately unless they are assessed together.);
whether a χ² model test is, or is not, available. (A χ² value, degrees of freedom, and probability will be available for models reporting indices based on χ².)
and whether the researcher has considered both alpha (Type I) and beta (Type II) errors in making their index-based decisions (e.g. if the model is significantly data-inconsistent, the "tolerable" amount of inconsistency is likely to differ across medical, business, social, and psychological contexts.).
Some of the more commonly used fit statistics include
Chi-square
A fundamental test of fit used in the calculation of many other fit measures. It is a function of the discrepancy between the observed covariance matrix and the model-implied covariance matrix. Chi-square increases with sample size only if the model is detectably misspecified.
Akaike information criterion (AIC)
An index of relative model fit: the preferred model is the one with the lowest AIC value. It is calculated as
AIC = 2k − 2 ln(L)
where k is the number of parameters in the statistical model, and L is the maximized value of the likelihood of the model.
Root Mean Square Error of Approximation (RMSEA)
Fit index where a value of zero indicates the best fit. Guidelines for determining a "close fit" using RMSEA are highly contested.
Standardized Root Mean Squared Residual (SRMR)
The SRMR is a popular absolute fit indicator. Hu and Bentler (1999) suggested .08 or smaller as a guideline for good fit.
Comparative Fit Index (CFI)
In examining baseline comparisons, the CFI depends in large part on the average size of the correlations in the data. If the average correlation between variables is not high, then the CFI will not be very high. A CFI value of .95 or higher is desirable.
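As an illustration of how two of these indices are commonly computed from χ² values, the following Python sketch implements the usual point-estimate formulas for RMSEA and CFI. The numerical inputs are hypothetical, and the cautions about index criteria discussed above still apply.

```python
# Minimal sketch of RMSEA and CFI point estimates computed from chi-squared values;
# the inputs below are hypothetical.
import math

def rmsea(chi_sq, df, n_cases):
    """Point estimate of the Root Mean Square Error of Approximation."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n_cases - 1)))

def cfi(chi_sq, df, chi_sq_null, df_null):
    """Comparative Fit Index relative to the independence (null) model."""
    d_model = max(chi_sq - df, 0.0)
    d_null = max(chi_sq_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

print(rmsea(25.4, 8, 300))        # roughly 0.085 for these inputs
print(cfi(25.4, 8, 480.0, 15))    # roughly 0.96 for these inputs
```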
References documenting these, and other, features are available for some common indices: the RMSEA (Root Mean Square Error of Approximation), SRMR (Standardized Root Mean Squared Residual), CFI (Comparative Fit Index), and the TLI (the Tucker-Lewis Index). Additional indices such as the AIC (Akaike Information Criterion) can be found in most SEM introductions. For each measure of fit, a decision as to what represents a good-enough fit between the model and the data reflects the researcher's modeling objective (perhaps challenging someone else's model, or improving measurement); whether or not the model is to be claimed as having been "tested"; and whether the researcher is comfortable "disregarding" evidence of the index-documented degree of ill fit.
Sample size, power, and estimation
Researchers agree samples should be large enough to provide stable coefficient estimates and reasonable testing power but there is no general consensus regarding specific required sample sizes, or even how to determine appropriate sample sizes. Recommendations have been based on the number of coefficients to be estimated, the number of modeled variables, and Monte Carlo simulations addressing specific model coefficients. Sample size recommendations based on the ratio of the number of indicators to latents are factor oriented and do not apply to models employing single indicators having fixed nonzero measurement error variances. Overall, for moderate sized models without statistically difficult-to-estimate coefficients, the required sample sizes (N’s) seem roughly comparable to the N’s required for a regression employing all the indicators.
The larger the sample size, the greater the likelihood of including cases that are not causally homogeneous. Consequently, increasing N to improve the likelihood of being able to report a desired coefficient as statistically significant, simultaneously increases the risk of model misspecification, and the power to detect the misspecification. Researchers seeking to learn from their modeling (including potentially learning their model requires adjustment or replacement) will strive for as large a sample size as permitted by funding and by their assessment of likely population-based causal heterogeneity/homogeneity. If the available N is huge, modeling sub-sets of cases can control for variables that might otherwise disrupt causal homogeneity. Researchers fearing they might have to report their model’s deficiencies are torn between wanting a larger N to provide sufficient power to detect structural coefficients of interest, while avoiding the power capable of signaling model-data inconsistency. The huge variation in model structures and data characteristics suggests adequate sample sizes might be usefully located by considering other researchers’ experiences (both good and bad) with models of comparable size and complexity that have been estimated with similar data.
Interpretation
Causal interpretations of SE models are the clearest and most understandable but those interpretations will be fallacious/wrong if the model’s structure does not correspond to the world’s causal structure. Consequently, interpretation should address the overall status and structure of the model, not merely the model’s estimated coefficients. Whether a model fits the data, and/or how a model came to fit the data, are paramount for interpretation. Data fit obtained by exploring, or by following successive modification indices, does not guarantee the model is wrong but raises serious doubts because these approaches are prone to incorrectly modeling data features. For example, exploring to see how many factors are required preempts finding the data are not factor structured, especially if the factor model has been “persuaded” to fit via inclusion of measurement error covariances. Data’s ability to speak against a postulated model is progressively eroded with each unwarranted inclusion of a “modification index suggested” effect or error covariance. It becomes exceedingly difficult to recover a proper model if the initial/base model contains several misspecifications.
Direct-effect estimates are interpreted in parallel to the interpretation of coefficients in regression equations but with causal commitment. Each unit increase in a causal variable’s value is viewed as producing a change of the estimated magnitude in the dependent variable’s value given control or adjustment for all the other operative/modeled causal mechanisms. Indirect effects are interpreted similarly, with the magnitude of a specific indirect effect equaling the product of the series of direct effects comprising that indirect effect. The units involved are the real scales of observed variables’ values, and the assigned scale values for latent variables. A specified/fixed 1.0 effect of a latent on a specific indicator coordinates that indicator’s scale with the latent variable’s scale. The presumption that the remainder of the model remains constant or unchanging may require discounting indirect effects that might, in the real world, be simultaneously prompted by a real unit increase. And the unit increase itself might be inconsistent with what is possible in the real world because there may be no known way to change the causal variable’s value. If a model adjusts for measurement errors, the adjustment permits interpreting latent-level effects as referring to variations in true scores.
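The product rule for indirect effects, and its matrix generalization for total effects, can be sketched in Python with numpy as follows; the three-variable path model and its coefficient values are invented for illustration.

```python
# Minimal sketch of indirect- and total-effect computation from direct-effect
# estimates; the coefficient values are hypothetical.
import numpy as np

# Direct effects among three variables ordered (X, M, Y):
# B[i, j] is the direct effect of variable j on variable i.
B = np.array([[0.0, 0.0, 0.0],   # X has no modeled causes here
              [0.4, 0.0, 0.0],   # X -> M = 0.4
              [0.1, 0.5, 0.0]])  # X -> Y = 0.1,  M -> Y = 0.5

# The indirect effect of X on Y through M is the product of the path coefficients.
indirect_x_to_y = B[1, 0] * B[2, 1]            # 0.4 * 0.5 = 0.2

# Total effects = (I - B)^(-1) - I, which sums the direct and all indirect paths.
identity = np.eye(3)
total = np.linalg.inv(identity - B) - identity
print("indirect X->Y:", indirect_x_to_y)       # 0.2
print("total X->Y:", total[2, 0])              # 0.1 + 0.2 = 0.3
```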
SEM interpretations depart most radically from regression interpretations when a network of causal coefficients connects the latent variables because regressions do not contain estimates of indirect effects. SEM interpretations should convey the consequences of the patterns of indirect effects that carry effects from background variables through intervening variables to the downstream dependent variables. SEM interpretations encourage understanding how multiple worldly causal pathways can work in coordination, or independently, or even counteract one another. Direct effects may be counteracted (or reinforced) by indirect effects, or have their correlational implications counteracted (or reinforced) by the effects of common causes. The meaning and interpretation of specific estimates should be contextualized in the full model.
SE model interpretation should connect specific model causal segments to their variance and covariance implications. A single direct effect reports that the variance in the independent variable produces a specific amount of variation in the dependent variable’s values, but the causal details of precisely what makes this happens remains unspecified because a single effect coefficient does not contain sub-components available for integration into a structured story of how that effect arises. A more fine-grained SE model incorporating variables intervening between the cause and effect would be required to provide features constituting a story about how any one effect functions. Until such a model arrives each estimated direct effect retains a tinge of the unknown, thereby invoking the essence of a theory. A parallel essential unknownness would accompany each estimated coefficient in even the more fine-grained model, so the sense of fundamental mystery is never fully eradicated from SE models.
Even if each modeled effect is unknown beyond the identity of the variables involved and the estimated magnitude of the effect, the structures linking multiple modeled effects provide opportunities to express how things function to coordinate the observed variables – thereby providing useful interpretation possibilities. For example, a common cause contributes to the covariance or correlation between two effected variables, because if the value of the cause goes up, the values of both effects should also go up (assuming positive effects) even if we do not know the full story underlying each cause. (A correlation is the covariance between two variables that have both been standardized to have variance 1.0). Another interpretive contribution might be made by expressing how two causal variables can both explain variance in a dependent variable, as well as how covariance between two such causes can increase or decrease explained variance in the dependent variable. That is, interpretation may involve explaining how a pattern of effects and covariances can contribute to decreasing a dependent variable’s variance. Understanding causal implications implicitly connects to understanding “controlling”, and potentially explaining why some variables, but not others, should be controlled. As models become more complex these fundamental components can combine in non-intuitive ways, such as explaining how there can be no correlation (zero covariance) between two variables despite the variables being connected by a direct non-zero causal effect.
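A small simulation can make the common-cause point concrete. The effect sizes below are arbitrary, and the script simply checks that the sample covariance of the two affected variables approximates the product of the two effects times the variance of the common cause.

```python
# Minimal simulation of a common cause inducing covariance between two affected
# variables; effect sizes and sample size are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # common cause with variance near 1
y1 = 0.6 * z + rng.normal(scale=0.8, size=n)  # effect of Z on Y1 is 0.6
y2 = 0.7 * z + rng.normal(scale=0.7, size=n)  # effect of Z on Y2 is 0.7

# The implied covariance is 0.6 * 0.7 * var(Z), i.e. about 0.42.
print("sample cov(y1, y2): ", np.cov(y1, y2)[0, 1])
print("sample corr(y1, y2):", np.corrcoef(y1, y2)[0, 1])
```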
The statistical insignificance of an effect estimate indicates the estimate could rather easily arise as a random sampling variation around a null/zero effect, so interpreting the estimate as a real effect becomes equivocal. As in regression, the proportion of each dependent variable’s variance explained by variations in the modeled causes are provided by R2, though the Blocked-Error R2 should be used if the dependent variable is involved in reciprocal or looped effects, or if it has an error variable correlated with any predictor’s error variable.
The caution appearing in the Model Assessment section warrants repeat. Interpretation should be possible whether a model is or is not consistent with the data. The estimates report how the world would appear to someone believing the model – even if that belief is unfounded because the model happens to be wrong. Interpretation should acknowledge that the model coefficients may or may not correspond to “parameters” – because the model’s coefficients may not have corresponding worldly structural features.
Adding new latent variables entering or exiting the original model at a few clear causal locations/variables contributes to detecting model misspecifications which could otherwise ruin coefficient interpretations. The correlations between the new latent’s indicators and all the original indicators contribute to testing the original model’s structure because the few new and focused effect coefficients must work in coordination with the model’s original direct and indirect effects to coordinate the new indicators with the original indicators. If the original model’s structure was problematic, the sparse new causal connections will be insufficient to coordinate the new indicators with the original indicators, thereby signaling the inappropriateness of the original model’s coefficients through model-data inconsistency. The correlational constraints grounded in null/zero effect coefficients, and coefficients assigned fixed nonzero values, contribute to both model testing and coefficient estimation, and hence deserve acknowledgment as the scaffolding supporting the estimates and their interpretation.
Interpretations become progressively more complex for models containing interactions, nonlinearities, multiple groups, multiple levels, and categorical variables. Effects touching causal loops, reciprocal effects, or correlated residuals also require slightly revised interpretations.
Careful interpretation of both failing and fitting models can provide research advancement. To be dependable, the model should investigate academically informative causal structures, fit applicable data with understandable estimates, and not include vacuous coefficients. Dependable fitting models are rarer than failing models or models inappropriately bludgeoned into fitting, but appropriately-fitting models are possible.
The multiple ways of conceptualizing PLS models complicate interpretation of PLS models. Many of the above comments are applicable if a PLS modeler adopts a realist perspective by striving to ensure their modeled indicators combine in a way that matches some existing but unavailable latent variable. Non-causal PLS models, such as those focusing primarily on R2 or out-of-sample predictive power, change the interpretation criteria by diminishing concern for whether or not the model’s coefficients have worldly counterparts. The fundamental features differentiating the five PLS modeling perspectives discussed by Rigdon, Sarstedt and Ringle point to differences in PLS modelers’ objectives, and corresponding differences in model features warranting interpretation.
Caution should be taken when making claims of causality even when experiments or time-ordered investigations have been undertaken. The term causal model must be understood to mean "a model that conveys causal assumptions", not necessarily a model that produces validated causal conclusions—maybe it does, maybe it does not. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses, but even a randomized experiment cannot fully rule out threats to causal claims. No research design can fully guarantee causal structures.
Controversies and movements
Structural equation modeling is fraught with controversies. Researchers from the factor analytic tradition commonly attempt to reduce sets of multiple indicators to fewer, more manageable, scales or factor-scores for later use in path-structured models. This constitutes a stepwise process with the initial measurement step providing scales or factor-scores which are to be used later in a path-structured model. This stepwise approach seems obvious but actually confronts severe underlying deficiencies. The segmentation into steps interferes with thorough checking of whether the scales or factor-scores validly represent the indicators, and/or validly report on latent level effects. A structural equation model simultaneously incorporating both the measurement and latent-level structures not only checks whether the latent factors appropriately coordinate the indicators, it also checks whether that same latent simultaneously appropriately coordinates each latent's indicators with the indicators of theorized causes and/or consequences of that latent. If a latent is unable to do both these styles of coordination, the validity of that latent is questioned, and so is any scale or set of factor-scores purporting to measure that latent. The disagreements swirled around respect for, or disrespect of, evidence challenging the validity of postulated latent factors. The simmering, sometimes boiling, discussions resulted in a special issue of the journal Structural Equation Modeling focused on a target article by Hayduk and Glaser followed by several comments and a rejoinder, all made freely available, thanks to the efforts of George Marcoulides.
These discussions fueled disagreement over whether or not structural equation models should be tested for consistency with the data, and model testing became the next focus of discussions. Scholars having path-modeling histories tended to defend careful model testing while those with factor-histories tended to defend fit-indexing rather than fit-testing. These discussions led to a target article in Personality and Individual Differences by Paul Barrett who said: “In fact, I would now recommend banning ALL such indices from ever appearing in any paper as indicative of model “acceptability” or “degree of misfit”.” (page 821). Barrett’s article was also accompanied by commentary from both perspectives.
The controversy over model testing declined as clear reporting of significant model-data inconsistency became mandatory. Scientists do not get to ignore, or fail to report, evidence just because they do not like what the evidence reports. The requirement of attending to evidence pointing toward model mis-specification underpins more recent concern for addressing "endogeneity" – a style of model mis-specification that interferes with estimation due to lack of independence of error/residual variables. In general, the controversy over the causal nature of structural equation models, including factor-models, has also been declining. Stan Mulaik, a factor-analysis stalwart, has acknowledged the causal basis of factor models. The comments by Bollen and Pearl regarding myths about causality in the context of SEM reinforced the centrality of causal thinking in the context of SEM.
A briefer controversy focused on competing models. Comparing competing models can be very helpful but there are fundamental issues that cannot be resolved by creating two models and retaining the better fitting model. The statistical sophistication of presentations like Levy and Hancock (2007), for example, makes it easy to overlook that a researcher might begin with one terrible model and one atrocious model, and end by retaining the structurally terrible model because some index reports it as better fitting than the atrocious model. It is unfortunate that even otherwise strong SEM texts like Kline (2016) remain disturbingly weak in their presentation of model testing. Overall, the contributions that can be made by structural equation modeling depend on careful and detailed model assessment, even if a failing model happens to be the best available.
An additional controversy that touched the fringes of the previous controversies awaits ignition. Factor models and theory-embedded factor structures having multiple indicators tend to fail, and dropping weak indicators tends to reduce the model-data inconsistency. Reducing the number of indicators leads to concern for, and controversy over, the minimum number of indicators required to support a latent variable in a structural equation model. Researchers tied to factor tradition can be persuaded to reduce the number of indicators to three per latent variable, but three or even two indicators may still be inconsistent with a proposed underlying factor common cause. Hayduk and Littvay (2012) discussed how to think about, defend, and adjust for measurement error, when using only a single indicator for each modeled latent variable. Single indicators have been used effectively in SE models for a long time, but controversy remains only as far away as a reviewer who has considered measurement from only the factor analytic perspective.
Though declining, traces of these controversies are scattered throughout the SEM literature, and you can easily incite disagreement by asking: What should be done with models that are significantly inconsistent with the data? Or by asking: Does model simplicity override respect for evidence of data inconsistency? Or, what weight should be given to indexes which show close or not-so-close data fit for some models? Or, should we be especially lenient toward, and “reward”, parsimonious models that are inconsistent with the data? Or, given that the RMSEA condones disregarding some real ill fit for each model degree of freedom, doesn’t that mean that people testing models with null-hypotheses of non-zero RMSEA are doing deficient model testing? Considerable variation in statistical sophistication is required to cogently address such questions, though responses will likely center on the non-technical matter of whether or not researchers are required to report and respect evidence.
Extensions, modeling alternatives, and statistical kin
Categorical dependent variables
Categorical intervening variables
Copulas
Deep Path Modelling
Exploratory Structural Equation Modeling
Fusion validity models
Item response theory models
Latent class models
Latent growth modeling
Link functions
Longitudinal models
Measurement invariance models
Mixture model
Multilevel models, hierarchical models (e.g. people nested in groups)
Multiple group modelling with or without constraints between groups (genders, cultures, test forms, languages, etc.)
Multi-method multi-trait models
Random intercepts models
Structural Equation Model Trees
Structural Equation Multidimensional scaling
Software
Structural equation modeling programs differ widely in their capabilities and user requirements.
See also
References
Bibliography
Further reading
Bartholomew, D. J., and Knott, M. (1999) Latent Variable Models and Factor Analysis. Kendall's Library of Statistics, vol. 7. Edward Arnold Publishers.
Bentler, P.M. & Bonett, D.G. (1980), "Significance tests and goodness of fit in the analysis of covariance structures", Psychological Bulletin, 88, 588–606.
Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley.
Byrne, B. M. (2001) Structural Equation Modeling with AMOS – Basic Concepts, Applications, and Programming. LEA.
Goldberger, A. S. (1972). Structural equation models in the social sciences. Econometrica 40, 979–1001.
Hoyle, R. H. (ed) (1995) Structural Equation Modeling: Concepts, Issues, and Applications. SAGE.
External links
Structural equation modeling page under David Garson's StatNotes, NCSU
Issues and Opinion on Structural Equation Modeling, SEM in IS Research
The causal interpretation of structural equations (or SEM survival kit) by Judea Pearl 2000.
Structural Equation Modeling Reference List by Jason Newsom: journal articles and book chapters on structural equation models
Handbook of Management Scales, a collection of previously used multi-item scales to measure constructs for SEM
Graphical models
Latent variable models
Regression models
Structural equation models | 0.797778 | 0.996226 | 0.794767 |
Cheminformatics | Cheminformatics (also known as chemoinformatics) refers to the use of physical chemistry theory with computer and information science techniques—so called "in silico" techniques—in application to a range of descriptive and prescriptive problems in the field of chemistry, including in its applications to biology and related molecular fields. Such in silico techniques are used, for example, by pharmaceutical companies and in academic settings to aid and inform the process of drug discovery, for instance in the design of well-defined combinatorial libraries of synthetic compounds, or to assist in structure-based drug design. The methods can also be used in chemical and allied industries, and such fields as environmental science and pharmacology, where chemical processes are involved or studied.
History
Cheminformatics has been an active field in various guises since the 1970s and earlier, with activity in academic departments and commercial pharmaceutical research and development departments. The term chemoinformatics was defined in its application to drug discovery by F.K. Brown in 1998: "Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization." Since then, both terms, cheminformatics and chemoinformatics, have been used, although, lexicographically, cheminformatics appears to be more frequently used, despite academics in Europe declaring for the variant chemoinformatics in 2006. In 2009, a prominent Springer journal in the field, the Journal of Cheminformatics, was founded by transatlantic executive editors.
Background
Cheminformatics combines the scientific working fields of chemistry, computer science, and information science—for example in the areas of topology, chemical graph theory, information retrieval and data mining in the chemical space. Cheminformatics can also be applied to data analysis for various industries like paper and pulp, dyes and such allied industries.
Applications
Storage and retrieval
A primary application of cheminformatics is the storage, indexing, and search of information relating to chemical compounds. The efficient search of such stored information includes topics that are dealt with in computer science, such as data mining, information retrieval, information extraction, and machine learning. Related research topics include:
Digital libraries
Unstructured data
Structured data mining
Database mining
Graph mining
Molecule mining
Sequence mining
Tree mining
File formats
The in silico representation of chemical structures uses specialized formats such as the Simplified Molecular Input Line Entry System (SMILES) or the XML-based Chemical Markup Language. These representations are often used for storage in large chemical databases. While some formats are suited for visual representations in two or three dimensions, others are more suited for studying physical interactions, modeling and docking studies.
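As a brief illustration of working with such formats, the following sketch assumes the open-source RDKit toolkit is installed; it parses a SMILES string, writes its canonical form, computes a simple property, and exports an MDL Molfile block.

```python
# Small sketch of reading a SMILES string and exporting other representations,
# assuming RDKit is installed; the molecule (aspirin) is just an example.
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(Chem.MolToSmiles(mol))       # canonical SMILES
print(Descriptors.MolWt(mol))      # molecular weight
print(Chem.MolToMolBlock(mol))     # MDL Molfile block, a common storage format
```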
Virtual libraries
Chemical data can pertain to real or virtual molecules. Virtual libraries of compounds may be generated in various ways to explore chemical space and hypothesize novel compounds with desired properties. Virtual libraries of classes of compounds (drugs, natural products, diversity-oriented synthetic products) were recently generated using the FOG (fragment optimized growth) algorithm. This was done by using cheminformatic tools to train transition probabilities of a Markov chain on authentic classes of compounds, and then using the Markov chain to generate novel compounds that were similar to the training database.
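The following toy sketch illustrates only the general idea of sampling fragment sequences from trained transition probabilities; the fragment labels and probabilities are invented, and this is not an implementation of the published FOG algorithm.

```python
# Toy illustration of Markov-chain-based library growth; fragments and
# transition probabilities are invented for the example.
import random

transitions = {
    "START":      [("phenyl", 0.5), ("pyridine", 0.3), ("cyclohexyl", 0.2)],
    "phenyl":     [("amide", 0.6), ("ether", 0.3), ("STOP", 0.1)],
    "pyridine":   [("amide", 0.5), ("STOP", 0.5)],
    "cyclohexyl": [("ether", 0.7), ("STOP", 0.3)],
    "amide":      [("methyl", 0.4), ("STOP", 0.6)],
    "ether":      [("methyl", 0.5), ("STOP", 0.5)],
    "methyl":     [("STOP", 1.0)],
}

def grow():
    """Sample one fragment sequence by walking the transition table."""
    state, sequence = "START", []
    while True:
        fragments, weights = zip(*transitions[state])
        state = random.choices(fragments, weights=weights)[0]
        if state == "STOP":
            return sequence
        sequence.append(state)

for _ in range(3):
    print(grow())
```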
Virtual screening
In contrast to high-throughput screening, virtual screening involves computationally screening in silico libraries of compounds, by means of various methods such as docking, to identify members likely to possess desired properties such as biological activity against a given target. In some cases, combinatorial chemistry is used in the development of the library to increase the efficiency in mining the chemical space. More commonly, a diverse library of small molecules or natural products is screened.
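Docking itself is beyond a short sketch, but a simple ligand-based virtual screen can be illustrated with fingerprint similarity, assuming RDKit is installed; the query and "library" SMILES strings below are placeholders.

```python
# Minimal ligand-similarity screen (not docking), assuming RDKit is installed;
# the query and library molecules are illustrative only.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")               # aspirin as the query
library_smiles = ["c1ccccc1C(=O)O", "CCO", "CC(=O)Nc1ccc(O)cc1"]  # toy "library"

def fingerprint(mol):
    """Morgan (circular) fingerprint as a fixed-length bit vector."""
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

query_fp = fingerprint(query)
for smi in library_smiles:
    mol = Chem.MolFromSmiles(smi)
    score = DataStructs.TanimotoSimilarity(query_fp, fingerprint(mol))
    print(f"{smi}: Tanimoto = {score:.2f}")
```

Library members with the highest similarity scores to the query would be prioritized for further evaluation in such a screen.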
Quantitative structure-activity relationship (QSAR)
This is the calculation of quantitative structure–activity relationship and quantitative structure–property relationship values, used to predict the activity of compounds from their structures. In this context there is also a strong relationship to chemometrics. Chemical expert systems are also relevant, since they represent parts of chemical knowledge as an in silico representation. There is a relatively new concept of matched molecular pair analysis, or prediction-driven MMPA, which is coupled with QSAR models in order to identify activity cliffs.
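A deliberately tiny QSAR-style sketch is shown below, assuming RDKit and scikit-learn are available; the molecules and activity values are invented, and a real QSAR study would use many more compounds, descriptors, and validation steps.

```python
# Toy QSAR sketch: a few descriptors regressed against invented activity values;
# assumes RDKit and scikit-learn are installed.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

smiles = ["CCO", "CCCO", "CCCCO", "CCCCCO", "c1ccccc1O"]
activity = np.array([1.2, 1.6, 2.1, 2.5, 1.9])   # hypothetical measured activities

def descriptors(smi):
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

X = np.array([descriptors(s) for s in smiles])
model = LinearRegression().fit(X, activity)
print("R^2 on training data:", model.score(X, activity))
print("predicted activity for 1-hexanol:", model.predict([descriptors("CCCCCCO")])[0])
```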
See also
Bioinformatics
Chemical file format
Chemicalize.org
Cheminformatics toolkits
Chemogenomics
Computational chemistry
Information engineering
Journal of Chemical Information and Modeling
Journal of Cheminformatics
Materials informatics
Molecular design software
Molecular graphics
Molecular Informatics
Molecular modelling
Nanoinformatics
Software for molecular modeling
WorldWide Molecular Matrix
Molecular descriptor
References
Further reading
External links
Computational chemistry
Drug discovery
Computational fields of study
Applied statistics | 0.810108 | 0.980826 | 0.794575 |
Lability | Lability refers to something that is constantly undergoing change or is likely to undergo change. It is the opposite (antonym) of stability.
Biochemistry
In reference to biochemistry, this is an important concept as far as kinetics is concerned in metalloproteins. This can allow for the rapid synthesis and degradation of substrates in biological systems.
Biology
Cells
Labile cells refer to cells that constantly divide by entering and remaining in the cell cycle. These are contrasted with "stable cells" and "permanent cells".
An important example of this is in the epithelium of the cornea, where cells divide at the basal level and move upwards, and the topmost cells die and fall off.
Proteins
In medicine, the term "labile" means susceptible to alteration or destruction. For example, a heat-labile protein is one that can be changed or destroyed at high temperatures.
The opposite of labile in this context is "stable".
Soils
Compounds or materials that are easily transformed (often by biological activity) are termed labile. For example, labile phosphate is that fraction of soil phosphate that is readily transformed into soluble or plant-available phosphate. Labile organic matter is the soil organic matter that is easily decomposed by microorganisms.
Chemistry
The term is used to describe a transient chemical species. As a general example, if a molecule exists in a particular conformation for a short lifetime, before adopting a lower energy conformation (structural arrangement), the former molecular structure is said to have 'high lability' (such as C25, a 25-carbon fullerene spheroid). The term is sometimes also used in reference to reactivity – for example, a complex that quickly reaches equilibrium in solution is said to be labile (with respect to that solution). Another common example is the cis effect in organometallic chemistry, which is the labilization of CO ligands in the cis position of octahedral transition metal complexes.
See also
Chemical stability
Equilibrium chemistry
Dynamic equilibrium
Instability
Metastability
Reaction intermediate
Emotional lability
References
Chemical reactions | 0.804047 | 0.987655 | 0.794121 |
Acclimatization | Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do.
Names
The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimatation is less commonly encountered, and fewer dictionaries enter it.
Methods
Biochemical
In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low temperatures display relatively high resting levels of heat shock proteins so that when they are exposed to even more extreme temperatures the proteins are readily available. Expression of heat shock proteins and regulation of membrane fluidity are just two of many biochemical methods organisms use to acclimate to novel environments.
Morphological
Organisms are able to change several characteristics relating to their morphology in order to maintain performance in novel environments. For example, birds often increase their organ size to increase their metabolism. This can take the form of an increase in the mass of nutritional organs or heat-producing organs, like the pectorals (with the latter being more consistent across species).
The theory
While the capacity for acclimatization has been documented in thousands of species, researchers still know very little about how and why organisms acclimate in the way that they do. Since researchers first began to study acclimation, the overwhelming hypothesis has been that all acclimation serves to enhance the performance of the organism. This idea has come to be known as the beneficial acclimation hypothesis. Despite such widespread support for the beneficial acclimation hypothesis, not all studies show that acclimation always serves to enhance performance (See beneficial acclimation hypothesis). One of the major objections to the beneficial acclimation hypothesis is that it assumes that there are no costs associated with acclimation. However, there are likely to be costs associated with acclimation. These include the cost of sensing the environmental conditions and regulating responses, producing structures required for plasticity (such as the energetic costs in expressing heat shock proteins), and genetic costs (such as linkage of plasticity-related genes with harmful genes).
Given the shortcomings of the beneficial acclimation hypothesis, researchers are continuing to search for a theory that will be supported by empirical data.
The degree to which organisms are able to acclimate is dictated by their phenotypic plasticity or the ability of an organism to change certain traits. Recent research in the study of acclimation capacity has focused more heavily on the evolution of phenotypic plasticity rather than acclimation responses. Scientists believe that when they understand more about how organisms evolved the capacity to acclimate, they will better understand acclimation.
Examples
Plants
Many plants, such as maple trees, irises, and tomatoes, can survive freezing temperatures if the temperature gradually drops lower and lower each night over a period of days or weeks. The same drop might kill them if it occurred suddenly. Studies have shown that tomato plants that were acclimated to higher temperature over several days were more efficient at photosynthesis at relatively high temperatures than were plants that were not allowed to acclimate.
In the orchid Phalaenopsis, phenylpropanoid enzymes are enhanced in the process of plant acclimatisation at different levels of photosynthetic photon flux.
Animals
Animals acclimatize in many ways. Sheep grow very thick wool in cold, damp climates. Fish are able to adjust only gradually to changes in water temperature and quality. Tropical fish sold at pet stores are often kept in acclimatization bags until this process is complete. Lowe & Vance (1995) were able to show that lizards acclimated to warm temperatures could maintain a higher running speed at warmer temperatures than lizards that were not acclimated to warm conditions. Fruit flies that develop at relatively cooler or warmer temperatures have increased cold or heat tolerance as adults, respectively (See Developmental plasticity).
Humans
The salt content of sweat and urine decreases as people acclimatize to hot conditions. Plasma volume, heart rate, and capillary activation are also affected.
Acclimatization to high altitude continues for months or even years after initial ascent, and ultimately enables humans to survive in an environment that, without acclimatization, would kill them. Humans who migrate permanently to a higher altitude naturally acclimatize to their new environment by developing an increase in the number of red blood cells to increase the oxygen carrying capacity of the blood, in order to compensate for lower levels of oxygen intake.
See also
Acclimatisation society
Beneficial acclimation hypothesis
Heat index
Introduced species
Phenotypic plasticity
Wind chill
References
Physiology
Ecological processes
Climate
Biology terminology | 0.798097 | 0.994293 | 0.793542 |
Bioorganic chemistry | Bioorganic chemistry is a scientific discipline that combines organic chemistry and biochemistry. It is that branch of life science that deals with the study of biological processes using chemical methods. Protein and enzyme function are examples of these processes.
Sometimes biochemistry is used interchangeably for bioorganic chemistry; the distinction being that bioorganic chemistry is organic chemistry that is focused on the biological aspects. While biochemistry aims at understanding biological processes using chemistry, bioorganic chemistry attempts to expand organic-chemical researches (that is, structures, synthesis, and kinetics) toward biology. When investigating metalloenzymes and cofactors, bioorganic chemistry overlaps bioinorganic chemistry.
Sub disciplines
Biophysical organic chemistry is a term used when attempting to describe intimate details of molecular recognition by bioorganic chemistry.
Natural product chemistry is the process of identifying compounds found in nature in order to determine their properties. Such discoveries have often led to medicinal uses and to the development of herbicides and insecticides.
References
Biochemistry | 0.821484 | 0.964827 | 0.79259 |
Bioinformatics | Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is often referred to as computational biology, though the distinction between the two terms is often disputed.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reused specific analysis "pipelines", particularly in the field of genomics, such as by the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The first definition of the term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970, to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involve the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books as well as methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were the proof of the concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
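A hedged, minimal Python sketch of the kind of computation behind such comparisons is shown below: it scores the percent identity of two short, pre-aligned DNA strings. The sequences are invented, and real tools such as BLAST use far more sophisticated alignment heuristics and statistics.

```python
# Minimal sketch: percent identity between two pre-aligned DNA sequences.
# The sequences are invented; real searches use tools such as BLAST.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned, non-gap positions that are identical, as a percentage."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    compared = matches = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":      # skip gap positions
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

print(percent_identity("ATGCC-ATTA", "ATGCCGATCA"))  # ~88.9 (8 of 9 compared positions match)
```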
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
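The toy Python sketch below conveys the core idea behind overlap-based assembly: repeatedly merge the pair of fragments with the longest suffix-prefix overlap. The reads are invented, and real assemblers use overlap or de Bruijn graphs, quality scores and error correction rather than this greedy loop.

```python
# Toy greedy overlap assembly: merge the two reads with the longest overlap
# until no overlaps remain. Reads are invented and error-free for simplicity.

def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # nothing left to merge
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)] + [merged]
    return reads

print(greedy_assemble(["ATGCGTAC", "GTACCTTA", "CTTAGGAC"]))  # ['ATGCGTACCTTAGGAC']
```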
Genome annotation
In genomics, annotation refers to the process of marking the start and stop regions of genes and other biological features in a sequenced genome. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
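As a minimal, hedged sketch of nucleotide-level annotation, the Python snippet below scans the forward strand of an invented DNA string for open reading frames (an ATG start codon followed in-frame by a stop codon). Real ab initio gene finders additionally model codon usage, splice sites and comparative evidence.

```python
# Toy open-reading-frame (ORF) finder on the forward strand only.
# The sequence is invented; production gene finders are far more elaborate.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna: str, min_codons: int = 3):
    """Yield (start, end, frame) for ORFs from ATG to an in-frame stop codon."""
    dna = dna.upper()
    for frame in range(3):
        pos = frame
        while pos + 3 <= len(dna):
            if dna[pos:pos + 3] == "ATG":
                for end in range(pos + 3, len(dna) - 2, 3):
                    if dna[end:end + 3] in STOP_CODONS:
                        if (end + 3 - pos) // 3 >= min_codons:
                            yield pos, end + 3, frame
                        pos = end  # resume scanning after this ORF
                        break
            pos += 3

print(list(find_orfs("CCATGAAATTTGGGTAACCATGCCC")))  # [(2, 17, 2)]
```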
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs, and ribosomal RNAs in order to make initial functional assignments. The GeneMark program, trained to find protein-coding genes in Haemophilus influenzae, is constantly changing and improving.
Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time
track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristics, fixed parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
Genetics of disease
As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that individually are weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as uncertainty about which genes are important and about how stable the choices provided by an algorithm are.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of rare variant association analysis in whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passenger mutations.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer-driving mutations in the genome. Furthermore, tracking patients as the disease progresses may become possible in the future by sequencing cancer samples over time. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
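A minimal sketch of such a comparison is given below: a two-sample t-test (via SciPy) on invented log2 expression values for a single gene in cancerous versus non-cancerous samples. Real analyses test thousands of genes at once and must correct for multiple testing and measurement bias.

```python
# Hedged sketch: is one gene's expression different between two groups?
# The numbers are invented; real studies apply multiple-testing correction.

from scipy import stats

tumor  = [8.1, 7.9, 8.4, 8.6, 8.2]   # log2 expression in cancerous cells
normal = [6.9, 7.1, 7.0, 6.8, 7.2]   # log2 expression in non-cancerous cells

t_stat, p_value = stats.ttest_ind(tumor, normal)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # a small p suggests up-regulation
```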
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process in which a signal, such as an extracellular hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can then be applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
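The sketch below illustrates the idea with k-means from scikit-learn on an invented gene-by-condition expression matrix; genes assigned to the same cluster would then be candidates for a shared regulatory program.

```python
# Minimal k-means clustering of (invented) expression profiles.

import numpy as np
from sklearn.cluster import KMeans

expression = np.array([
    [2.0, 2.1, 8.0, 8.2],   # gene A
    [1.9, 2.2, 7.8, 8.1],   # gene B (co-expressed with A)
    [7.9, 8.0, 1.8, 2.0],   # gene C
    [8.1, 7.7, 2.1, 1.9],   # gene D (co-expressed with C)
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
print(labels)  # genes sharing a label are candidates for co-regulation
```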
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C (experiment) and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups from around the world submit protein models for targets whose experimentally determined structures have not yet been released; the submitted models are then evaluated against those structures.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics is the use of protein structures for Virtual Screening models such as Quantitative Structure-Activity Relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulations of, for example, ligand-binding studies and in silico mutagenesis studies.
AlphaFold, a 2021 deep learning-based program developed by Google's DeepMind, greatly outperforms all other prediction software methods, and has released predicted structures for hundreds of millions of proteins in the AlphaFold protein structure database.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamics simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
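As a toy illustration of why the directed-acyclic-graph structure matters, the Python sketch below stores invented ontology terms with links to their parents and computes the set of all ancestors: annotating a gene with a specific term implicitly annotates it with every more general parent term.

```python
# Toy ontology as a DAG: each term lists its parent terms.
# Term names are invented stand-ins, not real Gene Ontology identifiers.

PARENTS = {
    "glycolysis": ["carbohydrate metabolism"],
    "carbohydrate metabolism": ["metabolic process"],
    "metabolic process": ["biological process"],
    "biological process": [],
}

def ancestors(term):
    """All terms reachable by repeatedly following parent links."""
    seen = set()
    stack = [term]
    while stack:
        current = stack.pop()
        for parent in PARENTS.get(current, []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(ancestors("glycolysis"))
# {'carbohydrate metabolism', 'metabolic process', 'biological process'} (in some order)
```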
The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below:
Used in biological sequence analysis: Genbank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding Protein Families and Motif Finding: InterPro, Pfam
Used for Next Generation Sequencing: Sequence Read Archive
Used in Network Analysis: Metabolic Pathway Databases (KEGG, BioCyc), Interaction Analysis Databases, Functional Networks
Used in design of synthetic genetic circuits: GenoCAD
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases have created opportunities for research groups to contribute to bioinformatics regardless of their funding arrangements. The open source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a Bioinformatics application. Such systems are designed to
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Some of the platforms that provide this service are Galaxy, Kepler, Taverna, UGENE, Anduril, and HIVE.
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was copied as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute object allows for the JSON-ized record to be shared among employees, collaborators, and regulators.
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π (4273pi) project also offers open source educational materials for free. The course runs on low cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University.
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
See also
References
Further reading
Sehgal et al. : Structural, phylogenetic and docking studies of D-amino acid oxidase activator(DAOA ), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013 10 :3.
Achuthsankar S Nair Computational Biology & Bioinformatics – A gentle Overview , Communications of Computer Society of India, January 2007
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006. (Chapman & Hall/Crc Computer and Information Science Series)
Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M. Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005.
Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000.
Soinov, L. Bioinformatics and Pattern Recognition Come Together Journal of Pattern Recognition Research (JPRR ), Vol 1 (1) 2006 p. 37–41
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013,
Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology MIT Course
Computational Biology: Genomes, Networks, Evolution Free MIT Course
External links
Bioinformatics Resource Portal (SIB)
Analytical chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration.
Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Qualitative and quantitative analysis can then be performed, often with the same instrument, and may exploit light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte.
Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering.
History
Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups.
The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860.
Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century.
The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples.
Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology.
Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical.
Classical methods
Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs.
Qualitative analysis
Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity.
Chemical tests
There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood.
Flame test
Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient.
Quantitative analysis
Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis).
Gravimetric analysis
Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water.
Volumetric analysis
Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the amount of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically measuring the pH after each drop of titrant is added, in order to understand different properties of the titrant.
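The arithmetic behind a simple 1:1 acid-base titration can be sketched as below (all values invented): at the equivalence point the moles of titrant equal the moles of analyte, so the unknown concentration follows from c1*V1 = c2*V2.

```python
# Hedged example of the stoichiometric arithmetic behind a 1:1 titration.
# Concentrations and volumes are invented for illustration.

def analyte_concentration(c_titrant, v_titrant_mL, v_analyte_mL):
    """Concentration (mol/L) of a 1:1 analyte from the titrant volume at the endpoint."""
    moles_titrant = c_titrant * (v_titrant_mL / 1000.0)
    return moles_titrant / (v_analyte_mL / 1000.0)

# 25.0 mL of 0.100 M NaOH neutralises 20.0 mL of HCl:
print(analyte_concentration(0.100, 25.0, 20.0))  # 0.125 M HCl
```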
Instrumental methods
Spectroscopy
Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on.
Mass spectrometry
Mass spectrometry measures mass-to-charge ratio of molecules using electric and magnetic fields. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on.
Electrochemical analysis
Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential).
Thermal analysis
Calorimetry and thermogravimetric analysis measure the interaction of a material and heat.
Separation
Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field.
Chromatographic assays
Chromatography can be used to determine the presence of substances in a sample as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Different components of a mixture can therefore be identified by their respective Rƒ values, which is the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography.
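The Rƒ calculation itself is just this ratio; a short, hedged example with invented distances:

```python
# Retention factor: distance moved by the substance divided by the distance
# moved by the solvent front. The measurements below are invented.

def retention_factor(substance_distance_cm, solvent_front_distance_cm):
    return substance_distance_cm / solvent_front_distance_cm

print(round(retention_factor(3.4, 8.0), 2))  # 0.42, to compare with reference values
```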
In combination with the instrumental methods, chromatography can be used in quantitative determination of the substances.
Hybrid techniques
Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. Examples include gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry.
Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of hyphen, especially if the name of one of the methods contains a hyphen itself.
Microscopy
The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. Recently, this field has been progressing rapidly because of the rapid development of the computer and camera industries.
Lab-on-a-chip
Devices that integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and that are capable of handling extremely small fluid volumes down to less than picoliters.
Errors
Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment, while random error results from uncontrolled or uncontrollable variables in the experiment.
The true value and the observed value in chemical analysis can be related to each other by the equation
$\epsilon_{abs} = x_{obs} - x_{true}$
where
$\epsilon_{abs}$ is the absolute error,
$x_{true}$ is the true value, and
$x_{obs}$ is the observed value.
The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement.
Errors can also be expressed relatively. The relative error is
$\epsilon_{rel} = \dfrac{\epsilon_{abs}}{x_{true}} = \dfrac{x_{obs} - x_{true}}{x_{true}}$
and the percent error is
$\%\,\epsilon = \epsilon_{rel} \times 100\%.$
If we want to use these values in a function, we may also want to calculate the error of the function. Let $F$ be a function of the $n$ variables $x_1, \dots, x_n$. The propagation of uncertainty (for independent variables) gives the error in $F$ as
$\Delta F = \sqrt{\sum_{i=1}^{n} \left( \dfrac{\partial F}{\partial x_i}\, \Delta x_i \right)^2}.$
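A hedged numerical sketch of this propagation formula, using central finite differences to approximate the partial derivatives, is given below; the example function (a concentration n/V) and the uncertainties are invented.

```python
# Numerical propagation of uncertainty for independent variables:
# Delta F = sqrt( sum_i (dF/dx_i * Delta x_i)^2 ), derivatives by finite differences.

import math

def propagate(f, values, uncertainties, h=1e-6):
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, uncertainties)):
        plus = list(values); plus[i] = x + h
        minus = list(values); minus[i] = x - h
        dfdx = (f(*plus) - f(*minus)) / (2 * h)  # central difference
        total += (dfdx * dx) ** 2
    return math.sqrt(total)

# Concentration c = n / V with n = 0.0025 +/- 0.0001 mol and V = 0.0200 +/- 0.0002 L
conc = lambda n, V: n / V
print(propagate(conc, [0.0025, 0.0200], [0.0001, 0.0002]))  # ~0.0052 mol/L
```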
Standards
Standard curve
A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample.
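A minimal sketch of a linear calibration curve, assuming invented standards and a least-squares fit with NumPy, is given below; real work would also examine blanks, linearity and residuals.

```python
# Fit signal vs. known standard concentrations, then invert the line to
# estimate an unknown. All numbers are invented for illustration.

import numpy as np

std_conc   = np.array([0.0, 1.0, 2.0, 4.0, 8.0])       # standards, e.g. mg/L
std_signal = np.array([0.02, 0.21, 0.40, 0.79, 1.60])  # instrument response

slope, intercept = np.polyfit(std_conc, std_signal, 1)  # least-squares line

unknown_signal = 0.55
unknown_conc = (unknown_signal - intercept) / slope
print(f"estimated concentration: {unknown_conc:.2f} mg/L")
```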
Internal standards
Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution.
Standard addition
The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem.
Signals and noise
One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR).
Noise can arise from environmental factors as well as from fundamental physical processes.
Thermal noise
Thermal noise results from the random thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that its power spectral density is constant throughout the frequency spectrum.
The root mean square value of the thermal noise voltage in a resistor is given by
$v_{RMS} = \sqrt{4 k_B T R \, \Delta f}$
where $k_B$ is the Boltzmann constant, $T$ is the temperature, $R$ is the resistance, and $\Delta f$ is the bandwidth of the frequency measurement.
Shot noise
Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal.
Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by
$i_{RMS} = \sqrt{2 e I \, \Delta f}$
where $e$ is the elementary charge, $I$ is the average current, and $\Delta f$ is the measurement bandwidth. Shot noise is white noise.
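For a sense of scale, the short sketch below evaluates both formulas for an assumed 1 kOhm resistor at room temperature and an assumed 1 nA current over a 1 kHz bandwidth; the component values are invented.

```python
# Magnitudes of thermal (Johnson) and shot noise for assumed, invented values.

import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
e   = 1.602176634e-19  # elementary charge, C

def thermal_noise_vrms(T, R, bandwidth):
    return math.sqrt(4 * k_B * T * R * bandwidth)

def shot_noise_irms(I, bandwidth):
    return math.sqrt(2 * e * I * bandwidth)

print(thermal_noise_vrms(T=298, R=1e3, bandwidth=1e3))  # ~1.3e-07 V
print(shot_noise_irms(I=1e-9, bandwidth=1e3))           # ~5.7e-13 A
```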
Flicker noise
Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel and generation–recombination noise in a transistor due to base current. This noise can be avoided by modulating the signal at a higher frequency, for example through the use of a lock-in amplifier.
Environmental noise
Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments.
Noise reduction
Noise reduction can be accomplished either in hardware or in software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods.
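A minimal software example of one of these techniques, a boxcar (moving) average applied to an invented noisy sine wave with NumPy, is sketched below.

```python
# Boxcar (moving-average) smoothing of a synthetic noisy signal.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)                  # 5 Hz "true" signal
noisy = clean + 0.3 * rng.standard_normal(t.size)  # add white noise

window = 11
boxcar = np.ones(window) / window                  # boxcar kernel
smoothed = np.convolve(noisy, boxcar, mode="same")

print((noisy - clean).std(), (smoothed - clean).std())  # residual noise shrinks
```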
Applications
Analytical chemistry has applications including in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed), and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest towards absolute (standardless) analysis has revived, particularly in emission spectrometry.
Great effort is being put into shrinking the analysis techniques to chip size (micro total analysis systems, μTAS, or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry reduces the amounts of chemicals used.
Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics, lipids and their associated fields; peptidomics, peptides and their associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules.
Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on.
The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic.
Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations.
See also
Analytical techniques
Important publications in analytical chemistry
List of chemical analysis methods
List of materials analysis methods
Measurement uncertainty
Metrology
Sensory analysis - in the field of Food science
Virtual instrumentation
Microanalysis
Quality of analytical results
Working range
References
Further reading
Gurdeep, Chatwal Anand (2008). Instrumental Methods of Chemical Analysis. Himalaya Publishing House (India).
Ralph L. Shriner, Reynold C. Fuson, David Y. Curtin, Terence C. Morill: The systematic identification of organic compounds - a laboratory manual, Wiley, New York, 1980, 6th edition.
Bettencourt da Silva, R; Bulska, E; Godlewska-Zylkiewicz, B; Hedrich, M; Majcen, N; Magnusson, B; Marincic, S; Papadakis, I; Patriarca, M; Vassileva, E; Taylor, P; Analytical measurement: measurement uncertainty and statistics, 2012.
External links
Infographic and animation showing the progress of analytical chemistry
aas Atomic Absorption Spectrophotometer
Materials science
Bottom–up and top–down design
Bottom–up and top–down are both strategies of information processing and ordering knowledge, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice they can be seen as a style of thinking, teaching, or leadership.
A top–down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional subsystems in a reverse engineering fashion. In a top–down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top–down model is often specified with the assistance of black boxes, which makes it easier to manipulate. However black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top–down approach starts with the big picture, then breaks down into smaller segments.
A bottom–up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems subsystems of the emergent system. Bottom–up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom–up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. But "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.
Product design and development
During the development of new products, designers and engineers rely on both bottom–up and top–down approaches. The bottom–up approach is being used when off-the-shelf or existing components are selected and integrated into the product. An example includes selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top–down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment), such as a spacesuit, a more top–down approach is taken and almost everything is custom designed.
Computer science
Software development
Part of this section is from the Perl Design Patterns Book.
In the software development process, the top–down and bottom–up approaches play a key role.
Top–down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top–down approaches are implemented by attaching stubs in place of modules that have not yet been written, but this delays testing of the ultimate functional units of a system until significant design is complete.
Bottom–up emphasizes coding and early testing, which can begin as soon as the first module has been specified. But this approach runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of a bottom–up approach.
Top–down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top–down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top–down programming was not strictly what he promoted. Top–down methods were favored in software engineering until the late 1980s, and object-oriented programming helped demonstrate that top–down and bottom–up approaches could be used together.
Modern software design approaches usually combine top–down and bottom–up approaches. Although an understanding of the complete system is usually considered necessary for good design—leading theoretically to a top-down approach—most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom–up flavor.
Programming
Top–down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top–down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized subroutines eventually will perform actions so simple they can be easily and concisely coded. When all the various subroutines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained.
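A minimal Python sketch of this style (the function names and file paths are invented for illustration): the main procedure is written first and names the major functions it needs, each of which starts life as a stub to be refined in later passes. Running it before the stubs are filled in simply fails, which mirrors the earlier point that top–down development defers testing of real functionality.

```python
# Top-down sketch: the high-level flow is fixed first; every lower-level
# function starts as a stub and is refined in later passes.

def load_records(path):
    # Stub: to be implemented once the file format is decided.
    raise NotImplementedError("load_records is not yet implemented")

def summarize(records):
    # Stub: to be implemented after load_records is refined.
    raise NotImplementedError("summarize is not yet implemented")

def write_report(summary, path):
    # Stub: to be implemented last.
    raise NotImplementedError("write_report is not yet implemented")

def main():
    # The overall structure of the program is already decided here,
    # even though none of the pieces below does real work yet.
    records = load_records("input.csv")
    summary = summarize(records)
    write_report(summary, "report.txt")

if __name__ == "__main__":
    main()
```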
In a bottom–up approach the individual base elements of the system are again specified in great detail and then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top–level system is formed; as before, this strategy resembles a "seed" model in which small beginnings grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, SolidWorks, and Autodesk Inventor, users can design products as pieces not part of the whole and later add those pieces together to form assemblies, like building with Lego. Engineers call this "piece part design".
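A complementary bottom–up sketch in Python (again with invented names) writes and exercises the smallest pieces first, then composes them into a larger routine:

```python
# Bottom-up sketch: the smallest, self-contained pieces are written and tested
# first, then composed into higher-level routines.

def parse_line(line):
    """Lowest-level piece: split one comma-separated line into fields."""
    return [field.strip() for field in line.split(",")]

def total(values):
    """Another small, independently testable piece."""
    return sum(float(v) for v in values)

def report(lines):
    """Higher-level routine assembled from the already-working pieces."""
    rows = [parse_line(line) for line in lines]
    return total(row[1] for row in rows)

# Each piece can be exercised as soon as it is written:
assert parse_line("widget, 2.5") == ["widget", "2.5"]
assert total(["1", "2.5"]) == 3.5
print(report(["widget, 2.5", "gadget, 1.5"]))  # prints 4.0
```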
Parsing
Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Parsers are commonly classified as top–down (such as recursive-descent parsers, which start from the grammar's highest-level rule) or bottom–up (such as shift-reduce parsers, which assemble the parse from the individual tokens upward).
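As a small illustration of the top–down style, the following recursive-descent sketch evaluates simple arithmetic expressions over an assumed toy grammar (expr → term ('+' term)*; term → NUMBER ('*' NUMBER)*):

```python
# Recursive-descent (top-down) parser/evaluator for a toy grammar:
#   expr -> term ('+' term)*
#   term -> NUMBER ('*' NUMBER)*

def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    value = float(tokens[pos])
    pos += 1
    while pos < len(tokens) and tokens[pos] == "*":
        value *= float(tokens[pos + 1])
        pos += 2
    return value, pos

tokens = ["1", "+", "2", "*", "3"]
result, _ = parse_expr(tokens)
print(result)  # 7.0, because '*' binds tighter than '+' in this grammar
```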
Nanotechnology
Top–down and bottom–up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom–up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top–down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as Silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.
A top–down approach often uses the traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a newer top–down secondary approach to engineering nanostructures.
Bottom–up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches use the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom–up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top–down methods but could potentially be overwhelmed as the size and complexity of the desired assembly increases.
Neuroscience and psychology
These terms are also employed in cognitive sciences including neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically, sensory input is considered bottom–up, and higher cognitive processes, which have more information from other sources, are considered top–down. A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by higher-level cognition, such as goals or targets (Biederman, 19).
According to college teaching notes written by Charles Ramskov, Irvin Rock, Ulric Neisser, and Richard Gregory claim that the top–down approach involves perception that is an active and constructive process. Additionally, perception is not given directly by stimulus input, but is the result of the interaction of the stimulus, internal hypotheses, and expectations. According to theoretical synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."
Conversely, psychology defines bottom–up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, Gibson, one proponent of the bottom–up approach, claims that visual perception is a process that relies on information available in the proximal stimulus produced by the distal stimulus. Theoretical synthesis also claims that bottom–up processing occurs "when a stimulus is presented long and clearly enough."
Certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom–up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top–down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1 mostly have bottom–up connections. Other areas, such as the fusiform gyrus have inputs from higher brain areas and are considered to have top–down influence.
The study of visual attention is an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom–up fashion—your attention was not contingent on knowledge of the flower: the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top–down information.
In cognition, two thinking approaches are distinguished. "Top–down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom–up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.
Studies in task switching and response selection show that there are differences between the two types of processing. Top–down processing primarily concerns attention, such as task repetition (Schneider, 2015), whereas bottom–up processing focuses on item-based learning, such as finding the same object over and over again (Schneider, 2015). This work has implications for understanding attentional control of response selection in conflict situations (Schneider, 2015).
These distinctions also apply to how such processing is structured neurologically, for example in how information interfaces are organized for procedural learning. Top–down principles have proven effective in guiding interface design, but they are not sufficient on their own; they can be combined with iterative bottom–up methods to produce usable interfaces (Zacks & Tversky, 2003).
Schooling
Undergraduate (or bachelor) students are typically taught the basics of top–down and bottom–up processing around their third year in the program, working through four main parts of the processing when viewing it from a learning perspective. The central distinction is that bottom–up processing is determined directly by environmental stimuli, rather than by the individual's knowledge and expectations (Koch, 2022).
Management and organization
In the fields of management and organization, the terms "top–down" and "bottom–up" are used to describe how decisions are made and/or how change is implemented.
A "top–down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then use a planned approach to drive the changes down to the frontline staff.
A bottom–up approach to changes is one that works from the grassroots, and originates in a flat structure with people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom–up" decision. A bottom–up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers".
Positive aspects of top–down approaches include their efficiency and the superb overview they give of higher levels; external effects can also be internalized. On the negative side, if reforms are perceived to be imposed "from above", it can be difficult for lower levels to accept them (e.g., Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g., Dubois 2002). A bottom–up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third combination approach to change.
Public health
Both top–down and bottom–up approaches are used in public health. There are many examples of top–down programs, often run by governments or large inter-governmental organizations; many of these are disease- or issue-specific, such as HIV control or smallpox eradication. Examples of bottom–up programs include many small NGOs set up to improve local access to healthcare. But many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary healthcare.
Architecture
Often the École des Beaux-Arts school of design is said to have primarily promoted top–down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.
By contrast, the Bauhaus focused on bottom–up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with the wood panel carving and furniture design).
Ecology
In ecology top–down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influences lower trophic levels. Changes in the top level of trophic levels have an inverse effect on the lower trophic levels. Top–down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms. In other words, such ecosystems are not controlled by productivity of the kelp, but rather, a top predator. One can see the inverse effect that top–down control has in this example; when the population of otters decreased, the population of the urchins increased.
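As a toy illustration of this otter–urchin dynamic, the sketch below integrates a simple Lotka–Volterra-style predator–prey model with and without the predator. All parameter values are hypothetical and the model is not calibrated to real kelp-forest data; it only shows qualitatively that removing the predator lets the prey population grow unchecked.

```python
# Toy Lotka-Volterra-style model of top-down control, integrated with simple
# Euler steps. Parameters are invented for illustration only.

def simulate(predators_present, steps=3000, dt=0.01):
    urchins = 10.0
    otters = 2.0 if predators_present else 0.0
    r, a, b, m = 0.5, 0.1, 0.05, 0.2  # prey growth, predation, conversion, predator death
    for _ in range(steps):
        du = (r * urchins - a * urchins * otters) * dt
        do = (b * urchins * otters - m * otters) * dt
        urchins += du
        otters += do
    return urchins

print("Urchins with otters present:", round(simulate(True), 1))
print("Urchins with otters removed:", round(simulate(False), 1))  # grows unchecked
```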
Bottom–up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.
There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems.
Philosophy and ethics
Top–down reasoning in ethics is when the reasoner starts from abstract universalizable principles and then reasons down from them to particular situations. Bottom–up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top-down and bottom-up reasoning until both are in harmony. That is to say, when universalizable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs when reasoners experience cognitive dissonance in trying to reconcile top–down with bottom–up reasoning and adjust one or the other until they are satisfied that they have found the best combination of principles and situational judgements.
See also
The Cathedral and the Bazaar
Pseudocode
References cited
https://philpapers.org/rec/COHTNO
Citations and notes
Further reading
Corpeño, E (2021). "The Top-Down Approach to Problem Solving: How to Stop Struggling in Class and Start Learning".
Goldstein, E.B. (2010). Sensation and Perception. USA: Wadsworth.
Galotti, K. (2008). Cognitive Psychology: In and out of the laboratory. USA: Wadsworth.
Dubois, Hans F.W. 2002. Harmonization of the European vaccination policy and the role TQM and reengineering could play. Quality Management in Health Care 10(2): 47–57.
J. A. Estes, M. T. Tinker, T. M. Williams, D. F. Doak "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998: Vol. 282, no. 5388, pp. 473–476
Luiz Carlos Bresser-Pereira, José María Maravall, and Adam Przeworski, 1993. Economic reforms in new democracies. Cambridge: Cambridge University Press.
External links
"Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971)
Integrated Parallel Bottom-up and Top-down Approach. In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998).
Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003.
K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989.
Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Dichotomies
Information science
Neuropsychology
Software design
Hierarchy
Metabolite
In biochemistry, a metabolite is an intermediate or end product of metabolism.
The term is usually used for small molecules. Metabolites have various functions, including fuel, structure, signaling, stimulatory and inhibitory effects on enzymes, catalytic activity of their own (usually as a cofactor to an enzyme), defense, and interactions with other organisms (e.g. pigments, odorants, and pheromones).
A primary metabolite is directly involved in normal "growth", development, and reproduction. Ethylene exemplifies a primary metabolite produced large-scale by industrial microbiology.
A secondary metabolite is not directly involved in those processes, but usually has an important ecological function. Examples include antibiotics and pigments such as resins and terpenes.
Some antibiotics use primary metabolites as precursors, such as actinomycin, which is created from the primary metabolite tryptophan. Some sugars are metabolites, such as fructose or glucose, both of which are present in metabolic pathways.
The metabolome forms a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions.
Metabolites from chemical compounds, whether inherent or pharmaceutical, form as part of the natural biochemical process of degrading and eliminating the compounds.
The rate of degradation of a compound is an important determinant of the duration and intensity of its action. Understanding how pharmaceutical compounds are metabolized and the potential side effects of their metabolites is an important part of drug discovery.
See also
Antimetabolite
Intermediary metabolism, also called intermediate metabolism
Metabolic control analysis
Metabolomics, the study of global metabolite profiles in a system (cell, tissue, or organism) under a given set of conditions
Metabolic pathway
Volatile organic compound
References
External links
Metabolism
AMBER
Assisted Model Building with Energy Refinement (AMBER) is the name of a widely used molecular dynamics software package originally developed by Peter Kollman's group at the University of California, San Francisco. It has also, subsequently, come to designate a family of force fields for molecular dynamics of biomolecules that can be used both within the AMBER software suite and with many modern computational platforms.
The original version of the AMBER software package was written by Paul Weiner as a post-doc in Peter Kollman's laboratory, and was released in 1981.
Subsequently, U Chandra Singh expanded AMBER as a post-doc in Kollman's laboratory, adding molecular dynamics and free energy capabilities.
The next iteration of AMBER was started around 1987 by a group of developers in (and associated with) the Kollman lab, including David Pearlman, David Case, James Caldwell, William Ross, Thomas Cheatham, Stephen DeBolt, David Ferguson, and George Seibel. This team headed development for more than a decade and introduced a variety of improvements, including significant expansion of the free energy capabilities, accommodation for modern parallel and array processing hardware platforms (Cray, Star, etc.), restructuring of the code and revision control for greater maintainability, PME Ewald summations, tools for NMR refinement, and many others.
Currently, AMBER is maintained by an active collaboration between David Case at Rutgers University, Tom Cheatham at the University of Utah, Adrian Roitberg at University of Florida, Ken Merz at Michigan State University, Carlos Simmerling at Stony Brook University, Ray Luo at UC Irvine, and Junmei Wang at University of Pittsburgh.
Force field
The term AMBER force field generally refers to the functional form used by the family of AMBER force fields. This form includes several parameters; each member of the family of AMBER force fields provides values for these parameters and has its own name.
Functional form
The functional form of the AMBER force field is
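One commonly quoted way of writing this potential (the exact symbol conventions vary between presentations; the symbols here follow the term-by-term description below) is approximately

$$
V(r^N) = \sum_{\text{bonds}} k_b (l - l_0)^2
+ \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2
+ \sum_{\text{torsions}} \frac{V_n}{2}\left[1 + \cos(n\omega - \gamma)\right]
+ \sum_{i<j} \left[\varepsilon_{ij}\left(\left(\frac{r_{0ij}}{r_{ij}}\right)^{12} - 2\left(\frac{r_{0ij}}{r_{ij}}\right)^{6}\right) + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}\right]
$$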
Despite the term force field, this equation defines the potential energy of the system; the force is the derivative of this potential relative to position.
The meanings of the right-hand-side terms are:
First term (summing over bonds): represents the energy between covalently bonded atoms. This harmonic (ideal spring) force is a good approximation near the equilibrium bond length, but becomes increasingly poor as atoms separate.
Second term (summing over angles): represents the energy due to the geometry of electron orbitals involved in covalent bonding.
Third term (summing over torsions): represents the energy for twisting a bond due to bond order (e.g., double bonds) and neighboring bonds or lone pairs of electrons. One bond may have more than one of these terms, such that the total torsional energy is expressed as a Fourier series.
Fourth term (double summation over and ): represents the non-bonded energy between all atom pairs, which can be decomposed into van der Waals (first term of summation) and electrostatic (second term of summation) energies.
The form of the van der Waals energy is calculated using the equilibrium distance $r_{0ij}$ and well depth $\varepsilon_{ij}$. The factor of 2 ensures that the equilibrium distance is $r_{0ij}$. The energy is sometimes reformulated in terms of $\sigma$, where $r_0 = 2^{1/6}\sigma$, as used e.g. in the implementation of the softcore potentials.
The form of the electrostatic energy used here assumes that the charges due to the protons and electrons in an atom can be represented by a single point charge (or in the case of parameter sets that employ lone pairs, a small number of point charges.)
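As a concrete illustration of these two non-bonded terms, here is a minimal Python sketch (this is not the AMBER code itself; the parameter values are invented and the Coulomb conversion constant is an approximate value in kcal·Å/(mol·e²)):

```python
# Pairwise non-bonded energy in the 12-6 (r0, epsilon) form described above,
# plus a point-charge Coulomb term. Illustrative only; not AMBER source code.

COULOMB_K = 332.06  # approximate conversion constant, kcal*Angstrom/(mol*e^2)

def nonbonded_pair_energy(r, r0, eps, qi, qj):
    """Energy (kcal/mol) of one atom pair at separation r (Angstrom)."""
    vdw = eps * ((r0 / r) ** 12 - 2.0 * (r0 / r) ** 6)  # minimum of -eps at r = r0
    elec = COULOMB_K * qi * qj / r                      # qi, qj are point charges in e
    return vdw + elec

# Made-up parameters: at r = r0 the van der Waals term is exactly -eps.
print(nonbonded_pair_energy(r=3.5, r0=3.5, eps=0.15, qi=0.4, qj=-0.4))
```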
Parameter sets
To use the AMBER force field, it is necessary to have values for the parameters of the force field (e.g. force constants, equilibrium bond lengths and angles, charges). A fairly large number of these parameter sets exist, and are described in detail in the AMBER software user manual. Each parameter set has a name, and provides parameters for certain types of molecules.
Peptide, protein, and nucleic acid parameters are provided by parameter sets with names starting with "ff" and containing a two-digit year number, for instance "ff99". As of 2018 the primary protein model used by the AMBER suite is the ff14SB force field.
General AMBER force field (GAFF) provides parameters for small organic molecules to facilitate simulations of drugs and small molecule ligands in conjunction with biomolecules.
The GLYCAM force fields have been developed by Rob Woods for simulating carbohydrates.
The primary force field used in the AMBER suite for lipids is Lipid14.
Software
The AMBER software suite provides a set of programs to apply the AMBER forcefields to simulations of biomolecules. It is written in the programming languages Fortran 90 and C, with support for most major Unix-like operating systems and compilers. Development is conducted by a loose association of mostly academic labs. New versions are released usually in the spring of even numbered years; AMBER 10 was released in April 2008. The software is available under a site license agreement, which includes full source, currently priced at US$500 for non-commercial and US$20,000 for commercial organizations.
Programs
LEaP prepares input files for the simulation programs.
Antechamber automates the process of parameterizing small organic molecules using GAFF.
Simulated Annealing with NMR-Derived Energy Restraints (SANDER) is the central simulation program and provides facilities for energy minimizing and molecular dynamics with a wide variety of options.
pmemd is a somewhat more feature-limited reimplementation of SANDER by Bob Duke. It was designed for parallel computing, and performs significantly better than SANDER when running on more than 8–16 processors.
pmemd.cuda runs simulations on machines with graphics processing units (GPUs).
pmemd.amoeba handles the extra parameters in the polarizable AMOEBA force field.
nmode calculates normal modes.
ptraj numerically analyzes simulation results. AMBER includes no visualization capabilities; visualization is commonly performed with Visual Molecular Dynamics (VMD). ptraj has been unsupported since AmberTools 13.
cpptraj is a rewritten version of ptraj made in C++ to give faster analysis of simulation results. Several actions have been made parallelizable with OpenMP and MPI.
MM-PBSA allows implicit solvent calculations on snap shots from molecular dynamics simulations.
NAB is a built-in nucleic acid building environment that aids in manipulating proteins and nucleic acids in cases where an atomic level of description is useful for the computation.
See also
References
Related reading
External links
AMBER mailing list archive
Amber on the German HPC-C5 Cluster-Systems
Fortran software
Molecular dynamics software
Force fields (chemistry)
Biological process
Biological processes are those processes that are necessary for an organism to live and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms.
Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule.
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature
Organization: being structurally composed of one or more cells – the basic units of life
Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter.
Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Interaction between organisms: the processes by which an organism has an observable effect on another organism of the same or different species.
Also: cellular differentiation, fermentation, fertilisation, germination, tropism, hybridisation, metamorphosis, morphogenesis, photosynthesis, transpiration.
See also
Chemical process
Life
Organic reaction
References
Biological concepts
Biosignature
A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon – that provides scientific evidence of past or present life on a planet. Measurable attributes of life include its physical or chemical structures, its use of free energy, and the production of biomass and wastes.
The field of astrobiology uses biosignatures as evidence for the search for past or present extraterrestrial life.
Types
Biosignatures can be grouped into ten broad categories:
Isotope patterns: Isotopic evidence or patterns that require biological processes.
Chemistry: Chemical features that require biological activity.
Organic matter: Organics formed by biological processes.
Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite).
Microscopic structures and textures: Biologically-formed cements, microtextures, microfossils, and films.
Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms.
Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence.
Surface reflectance features: Large-scale reflectance features due to biological pigments.
Atmospheric gases: Gases formed by metabolic processes, which may be present on a planet-wide scale.
Technosignatures: Signatures that indicate a technologically advanced civilization.
Viability
Determining whether an observed feature is a true biosignature is complex. There are three criteria that a potential biosignature must meet to be considered viable for further research: Reliability, survivability, and detectability.
Reliability
A biosignature must be able to dominate over all other processes that may produce similar physical, spectral, and chemical features. When investigating a potential biosignature, scientists must carefully consider all other possible origins of the biosignature in question. Many forms of life are known to mimic geochemical reactions. One of the theories on the origin of life involves molecules developing the ability to catalyse geochemical reactions to exploit the energy being released by them. These are some of the earliest known metabolisms (see methanogenesis). In such case, scientists might search for a disequilibrium in the geochemical cycle, which would point to a reaction happening more or less often than it should. A disequilibrium such as this could be interpreted as an indication of life.
Survivability
A biosignature must be able to last for long enough so that a probe, telescope, or human can be able to detect it. A consequence of a biological organism's use of metabolic reactions for energy is the production of metabolic waste. In addition, the structure of an organism can be preserved as a fossil and we know that some fossils on Earth are as old as 3.5 billion years. These byproducts can make excellent biosignatures since they provide direct evidence for life. However, in order to be a viable biosignature, a byproduct must subsequently remain intact so that scientists may discover it.
Detectability
A biosignature must be detectable with the latest technology to be relevant in scientific investigation. This seems obvious; however, there are many scenarios in which life may be present on a planet yet remain undetectable because of human-caused limitations.
False positives
Every possible biosignature is associated with its own set of unique false positive mechanisms or non-biological processes that can mimic the detectable feature of a biosignature. An important example is using oxygen as a biosignature. On Earth, the majority of life is centred around oxygen. It is a byproduct of photosynthesis and is subsequently used by other life forms to breathe. Oxygen is also readily detectable in spectra, with multiple bands across a relatively wide wavelength range, therefore, it makes a very good biosignature. However, finding oxygen alone in a planet's atmosphere is not enough to confirm a biosignature because of the false-positive mechanisms associated with it. One possibility is that oxygen can build up abiotically via photolysis if there is a low inventory of non-condensable gasses or if the planet loses a lot of water. Finding and distinguishing a biosignature from its potential false-positive mechanisms is one of the most complicated parts of testing for viability because it relies on human ingenuity to break an abiotic-biological degeneracy, if nature allows.
False negatives
Opposite to false positives, false negative biosignatures arise in a scenario where life may be present on another planet, but some processes on that planet make potential biosignatures undetectable. This is an ongoing problem and area of research in preparation for future telescopes that will be capable of observing exoplanetary atmospheres.
Human limitations
There are many ways in which humans may limit the viability of a potential biosignature. The resolution of a telescope becomes important when vetting certain false-positive mechanisms, and many current telescopes do not have the capabilities to observe at the resolution needed to investigate some of these. In addition, probes and telescopes are worked on by huge collaborations of scientists with varying interests. As a result, new probes and telescopes carry a variety of instruments that are a compromise to everyone's unique inputs. For a different type of scientist to detect something unrelated to biosignatures, a sacrifice may have to be made in the capability of an instrument to search for biosignatures.
General examples
Geomicrobiology
The ancient record on Earth provides an opportunity to see what geochemical signatures are produced by microbial life and how these signatures are preserved over geologic time. Some related disciplines such as geochemistry, geobiology, and geomicrobiology often use biosignatures to determine if living organisms are or were present in a sample. These possible biosignatures include: (a) microfossils and stromatolites; (b) molecular structures (biomarkers) and isotopic compositions of carbon, nitrogen and hydrogen in organic matter; (c) multiple sulfur and oxygen isotope ratios of minerals; and (d) abundance relationships and isotopic compositions of redox-sensitive metals (e.g., Fe, Mo, Cr, and rare earth elements).
For example, the particular fatty acids measured in a sample can indicate which types of bacteria and archaea live in that environment. Another example is the long-chain fatty alcohols with more than 23 carbon atoms that are produced by planktonic bacteria. When used in this sense, geochemists often prefer the term biomarker. Another example is the presence of straight-chain lipids in the form of alkanes, alcohols, and fatty acids with 20–36 carbon atoms in soils or sediments; in peat deposits these are an indication of an origin from the epicuticular wax of higher plants.
Life processes may produce a range of biosignatures such as nucleic acids, lipids, proteins, amino acids, kerogen-like material and various morphological features that are detectable in rocks and sediments. Microbes often interact with geochemical processes, leaving features in the rock record indicative of biosignatures. For example, bacterial micrometer-sized pores in carbonate rocks resemble inclusions under transmitted light, but have distinct sizes, shapes, and patterns (swirling or dendritic) and are distributed differently from common fluid inclusions. A potential biosignature is a phenomenon that may have been produced by life, but for which alternate abiotic origins may also be possible.
Morphology
Another possible biosignature might be morphology since the shape and size of certain objects may potentially indicate the presence of past or present life. For example, microscopic magnetite crystals in the Martian meteorite ALH84001 are one of the longest-debated of several potential biosignatures in that specimen. The possible biomineral studied in the Martian ALH84001 meteorite includes putative microbial fossils, tiny rock-like structures whose shape was a potential biosignature because it resembled known bacteria. Most scientists ultimately concluded that these were far too small to be fossilized cells. A consensus that has emerged from these discussions, and is now seen as a critical requirement, is the demand for further lines of evidence in addition to any morphological data that supports such extraordinary claims. Currently, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection". Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation.
Chemistry
No single compound will prove life once existed. Rather, it will be distinctive patterns present in any organic compounds showing a process of selection. For example, membrane lipids left behind by degraded cells will be concentrated, have a limited size range, and comprise an even number of carbons. Similarly, life only uses left-handed amino acids. Biosignatures need not be chemical, however, and can also be suggested by a distinctive magnetic biosignature.
Chemical biosignatures include any suite of complex organic compounds composed of carbon, hydrogen, and other elements or heteroatoms such as oxygen, nitrogen, and sulfur, which are found in crude oils, bitumen, petroleum source rock and eventually show simplification in molecular structure from the parent organic molecules found in all living organisms. They are complex carbon-based molecules derived from formerly living organisms. Each biomarker is quite distinctive when compared to its counterparts, as the time required for organic matter to convert to crude oil is characteristic. Most biomarkers also usually have high molecular mass.
Some examples of biomarkers found in petroleum are pristane, triterpanes, steranes, phytane and porphyrin. Such petroleum biomarkers are produced via chemical synthesis using biochemical compounds as their main constituents. For instance, triterpenes are derived from biochemical compounds found on land angiosperm plants. The abundance of petroleum biomarkers in small amounts in its reservoir or source rock make it necessary to use sensitive and differential approaches to analyze the presence of those compounds. The techniques typically used include gas chromatography and mass spectrometry.
Petroleum biomarkers are highly important in petroleum inspection as they help indicate the depositional territories and determine the geological properties of oils. For instance, they provide more details concerning their maturity and the source material. In addition to that they can also be good parameters of age, hence they are technically referred to as "chemical fossils". The ratio of pristane to phytane (pr:ph) is the geochemical factor that allows petroleum biomarkers to be successful indicators of their depositional environments.
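A rough sketch of how such a ratio might be read is given below; the cutoff values are commonly cited rules of thumb rather than fixed standards, the concentrations are invented, and real interpretations weigh many biomarkers together.

```python
# Illustrative reading of a pristane/phytane (Pr/Ph) ratio as a depositional
# hint. Thresholds are commonly cited rules of thumb, used here only as an example.

def depositional_hint(pristane, phytane):
    ratio = pristane / phytane
    if ratio < 0.8:
        return ratio, "suggests anoxic (often hypersaline or carbonate) deposition"
    if ratio > 3.0:
        return ratio, "suggests oxic deposition with terrigenous organic input"
    return ratio, "intermediate; other biomarkers are needed"

print(depositional_hint(pristane=1.2, phytane=2.0))  # hypothetical measured abundances
```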
Geologists and geochemists use biomarker traces found in crude oils and their related source rock to unravel the stratigraphic origin and migration patterns of presently existing petroleum deposits. The dispersion of biomarker molecules is also quite distinctive for each type of oil and its source; hence, they display unique fingerprints. Another factor that makes petroleum biomarkers more preferable than their counterparts is that they have a high tolerance to environmental weathering and corrosion. Such biomarkers are very advantageous and often used in the detection of oil spillage in the major waterways. The same biomarkers can also be used to identify contamination in lubricant oils. However, biomarker analysis of untreated rock cuttings can be expected to produce misleading results. This is due to potential hydrocarbon contamination and biodegradation in the rock samples.
Atmospheric
The atmospheric properties of exoplanets are of particular importance, as atmospheres provide the most likely observables for the near future, including habitability indicators and biosignatures. Over billions of years, the processes of life on a planet would result in a mixture of chemicals unlike anything that could form in an ordinary chemical equilibrium. For example, large amounts of oxygen and small amounts of methane are generated by life on Earth.
An exoplanet's color—or reflectance spectrum—can also be used as a biosignature due to the effect of pigments that are uniquely biologic in origin such as the pigments of phototrophic and photosynthetic life forms. Scientists use the Earth, as seen from far away (see Pale Blue Dot), as an example of this for comparison with worlds observed outside of our solar system. Ultraviolet radiation on life forms could also induce biofluorescence in visible wavelengths that may be detected by the new generation of space observatories under development.
Some scientists have reported methods of detecting hydrogen and methane in extraterrestrial atmospheres. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. For example, the presence of oxygen and methane together could indicate the kind of extreme thermochemical disequilibrium generated by life. Two of the top 14,000 proposed atmospheric biosignatures are dimethyl sulfide and chloromethane. An alternative biosignature is the combination of methane and carbon dioxide.
The detection of phosphine in the atmosphere of Venus is being investigated as a possible biosignature.
Atmospheric disequilibrium
A disequilibrium in the abundance of gas species in an atmosphere can be interpreted as a biosignature. Life has greatly altered the atmosphere on Earth in a way that would be unlikely for any other processes to replicate. Therefore, a departure from equilibrium is evidence for a biosignature. For example, the abundance of methane in the Earth's atmosphere is orders of magnitude above the equilibrium value due to the constant methane flux that life on the surface emits. Depending on the host star, a disequilibrium in the methane abundance on another planet may indicate a biosignature.
Agnostic biosignatures
Because the only form of known life is that on Earth, the search for biosignatures is heavily influenced by the products that life produces on Earth. However, life that is different from life on Earth may still produce biosignatures that are detectable by humans, even though nothing is known about their specific biology. This form of biosignature is called an "agnostic biosignature" because it is independent of the form of life that produces it. It is widely agreed that all life–no matter how different it is from life on Earth–needs a source of energy to thrive. This must involve some sort of chemical disequilibrium, which can be exploited for metabolism. Geological processes are independent of life, and if scientists can constrain the geology well enough on another planet, then they know what the particular geologic equilibrium for that planet should be. A deviation from geological equilibrium can be interpreted as an atmospheric disequilibrium and agnostic biosignature.
Antibiosignatures
In the same way that detecting a biosignature would be a significant discovery about a planet, finding evidence that life is not present can also be an important discovery about a planet. Life relies on redox imbalances to metabolize the resources available into energy. The evidence that nothing on a planet is taking advantage of the "free lunch" available due to an observed redox imbalance is called an antibiosignature.
Polyelectrolytes
The Polyelectrolyte theory of the gene is a proposed generic biosignature. In 2002, Steven A. Benner and Daniel Hutter proposed that for a linear genetic biopolymer dissolved in water, such as DNA, to undergo Darwinian evolution anywhere in the universe, it must be a polyelectrolyte, a polymer containing repeating ionic charges. Benner and others proposed methods for concentrating and analyzing these polyelectrolyte genetic biopolymers on Mars, Enceladus, and Europa.
Specific examples
Methane on Mars
The presence of methane in the atmosphere of Mars is an area of ongoing research and a highly contentious subject. Because of its tendency to be destroyed in the atmosphere by photochemistry, the presence of excess methane on a planet can indicate that there must be an active source. With life being the strongest source of methane on Earth, observing a disequilibrium in the methane abundance on another planet could be a viable biosignature.
Since 2004, there have been several detections of methane in the Mars atmosphere by a variety of instruments onboard orbiters and ground-based landers on the Martian surface as well as Earth-based telescopes. These missions reported values ranging from a 'background level' of 0.24 to 0.65 parts per billion by volume (p.p.b.v.) up to as much as 45 ± 10 p.p.b.v.
However, recent measurements using the ACS and NOMAD instruments on board the ESA-Roscosmos ExoMars Trace Gas Orbiter have failed to detect any methane over a range of latitudes and longitudes on both Martian hemispheres. These highly sensitive instruments were able to put an upper bound on the overall methane abundance at 0.05 p.p.b.v. This nondetection is a major contradiction to what was previously observed with less sensitive instruments and will remain a strong argument in the ongoing debate over the presence of methane in the Martian atmosphere.
Furthermore, current photochemical models cannot explain the presence of methane in the atmosphere of Mars and its reported rapid variations in space and time. Neither its fast appearance nor disappearance can be explained yet. To rule out a biogenic origin for the methane, a future probe or lander hosting a mass spectrometer will be needed, as the isotopic proportions of carbon-12 to carbon-13 in methane could distinguish between a biogenic and non-biogenic origin, similarly to the use of the δ13C standard for recognizing biogenic methane on Earth.
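For reference, the δ13C value mentioned here expresses the ¹³C/¹²C ratio of a sample relative to a reference standard, in parts per thousand:

$$
\delta^{13}\mathrm{C} = \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}
$$

On Earth, methane of microbial origin is typically markedly depleted in ¹³C (strongly negative δ¹³C) compared with most abiotic sources, which is why such a ratio can, in principle, discriminate between origins.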
Martian atmosphere
The Martian atmosphere contains high abundances of photochemically produced CO and H2, which are reducing molecules. Mars' atmosphere is otherwise mostly oxidizing, leading to a source of untapped energy that life could exploit if it used a metabolism compatible with one or both of these reducing molecules. Because these molecules can be observed, scientists use this as evidence for an antibiosignature. Scientists have used this concept as an argument against life on Mars.
Missions inside the Solar System
Astrobiological exploration is founded upon the premise that biosignatures encountered in space will be recognizable as extraterrestrial life. The usefulness of a biosignature is determined not only by the probability of life creating it but also by the improbability of non-biological (abiotic) processes producing it. Concluding that evidence of an extraterrestrial life form (past or present) has been discovered requires proving that a possible biosignature was produced by the activities or remains of life. As with most scientific discoveries, discovery of a biosignature will require evidence building up until no other explanation exists.
Possible examples of a biosignature include complex organic molecules or structures whose formation is virtually unachievable in the absence of life:
Cellular and extracellular morphologies
Biomolecules in rocks
Bio-organic molecular structures
Chirality
Biogenic minerals
Biogenic isotope patterns in minerals and organic compounds
Atmospheric gases
Photosynthetic pigments
The Viking missions to Mars
The Viking missions to Mars in the 1970s conducted the first experiments which were explicitly designed to look for biosignatures on another planet. Each of the two Viking landers carried three life-detection experiments which looked for signs of metabolism; however, the results were declared inconclusive.
Mars Science Laboratory
The Mars Science Laboratory mission, with its Curiosity rover, is currently assessing the potential past and present habitability of the Martian environment and is attempting to detect biosignatures on the surface of Mars. Considering the MSL instrument payload package, the following classes of biosignatures are within the MSL detection window: organism morphologies (cells, body fossils, casts), biofabrics (including microbial mats), diagnostic organic molecules, isotopic signatures, evidence of biomineralization and bioalteration, spatial patterns in chemistry, and biogenic gases. The Curiosity rover targets outcrops to maximize the probability of detecting 'fossilized' organic matter preserved in sedimentary deposits.
ExoMars Orbiter
The 2016 ExoMars Trace Gas Orbiter (TGO) is a Mars telecommunications orbiter and atmospheric gas analyzer mission. It delivered the Schiaparelli EDM lander and then began to settle into its science orbit to map the sources of methane and other gases on Mars, and in doing so, will help select the landing site for the Rosalind Franklin rover to be launched in 2022. The primary objective of the Rosalind Franklin rover mission is the search for biosignatures on the surface and subsurface by using a drill able to collect samples from below the surface, away from the destructive radiation that bathes it.
Mars 2020 Rover
The Mars 2020 rover, which launched in 2020, is intended to investigate an astrobiologically relevant ancient environment on Mars, investigate its surface geological processes and history, including the assessment of its past habitability, the possibility of past life on Mars, and potential for preservation of biosignatures within accessible geological materials. In addition, it will cache the most interesting samples for possible future transport to Earth.
Titan Dragonfly
NASA's Dragonfly lander/aircraft concept is proposed to launch in 2025 and would seek evidence of biosignatures on the organic-rich surface and atmosphere of Titan, as well as study its possible prebiotic primordial soup. Titan is the largest moon of Saturn and is widely believed to have a large subsurface ocean consisting of a salty brine. In addition, scientists believe that Titan may have the conditions necessary to promote prebiotic chemistry, making it a prime candidate for biosignature discovery.
Europa Clipper
NASA's Europa Clipper probe is designed as a flyby mission to Jupiter's smallest Galilean moon, Europa. The mission launched in October 2024 and is set to reach Europa in April 2030, where it will investigate the potential for habitability on Europa. Europa is one of the best candidates for biosignature discovery in the Solar System because of the scientific consensus that it retains a subsurface ocean, with two to three times the volume of water on Earth. Evidence for this subsurface ocean includes:
Voyager 1 (1979): The first close-up photos of Europa are taken. Scientists propose that a subsurface ocean could cause the tectonic-like marks on the surface.
Galileo (1997): The magnetometer aboard this probe detected a subtle change in the magnetic field near Europa. This was later interpreted as a disruption in the expected magnetic field due to the current induction in a conducting layer on Europa. The composition of this conducting layer is consistent with a salty subsurface ocean.
Hubble Space Telescope (2012): An image was taken of Europa which showed evidence for a plume of water vapor coming off the surface.
The Europa Clipper probe includes instruments to help confirm the existence and composition of a subsurface ocean and thick icy layer. In addition, the instruments will be used to map and study surface features that may indicate tectonic activity due to a subsurface ocean.
Enceladus
Although there are no set plans to search for biosignatures on Saturn's sixth-largest moon, Enceladus, the prospects of biosignature discovery there are exciting enough to warrant several mission concepts that may be funded in the future. Similar to Jupiter's moon Europa, there is much evidence for a subsurface ocean to also exist on Enceladus. Plumes of water vapor were first observed in 2005 by the Cassini mission and were later determined to contain salt as well as organic compounds. In 2014, more evidence was presented using gravimetric measurements on Enceladus to conclude that there is in fact a large reservoir of water underneath an icy surface. Mission design concepts include:
Enceladus Life Finder (ELF)
Enceladus Life Signatures and Habitability
Enceladus Organic Analyzer
Enceladus Explorer (En-Ex)
Explorer of Enceladus and Titan (E2T)
Journey to Enceladus and Titan (JET)
Life Investigation For Enceladus (LIFE)
Testing the Habitability of Enceladus's Ocean (THEO)
All of these concept missions have similar science goals: To assess the habitability of Enceladus and search for biosignatures, in line with the strategic map for exploring the ocean-world Enceladus.
Searching outside of the Solar System
At 4.2 light-years (1.3 parsecs, 40 trillion km, or 25 trillion miles) away from Earth, the closest potentially habitable exoplanet is Proxima Centauri b, which was discovered in 2016. This means it would take more than 18,100 years to get there if a vessel could consistently travel as fast as the Juno spacecraft (250,000 kilometers per hour or 150,000 miles per hour). It is currently not feasible to send humans or even probes to search for biosignatures outside of the Solar System. The only way to search for biosignatures outside of the Solar System is by observing exoplanets with telescopes.
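As a rough check of that figure, taking one light-year as about 9.46 × 10^12 km:

$$
t \approx \frac{4.2 \times 9.46 \times 10^{12}\ \text{km}}{2.5 \times 10^{5}\ \text{km/h}} \approx 1.6 \times 10^{8}\ \text{h} \approx 1.8 \times 10^{4}\ \text{years}.
$$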
There have been no plausible or confirmed biosignature detections outside of the Solar System. Despite this, it is a rapidly growing field of research due to the prospects of the next generation of telescopes. The James Webb Space Telescope, which launched in December 2021, will be a promising next step in the search for biosignatures. Although its wavelength range and resolution will not be compatible with some of the more important atmospheric biosignature gas bands like oxygen, it will still be able to detect some evidence for oxygen false positive mechanisms.
The new generation of ground-based 30-meter class telescopes (Thirty Meter Telescope and Extremely Large Telescope) will have the ability to take high-resolution spectra of exoplanet atmospheres at a variety of wavelengths. These telescopes will be capable of distinguishing some of the more difficult false positive mechanisms such as the abiotic buildup of oxygen via photolysis. In addition, their large collecting area will enable high angular resolution, making direct imaging studies more feasible.
See also
Bioindicator
MERMOZ (remote detection of lifeforms)
Taphonomy
Technosignature
References
Astrobiology
Astrochemistry
Bioindicators
Biology terminology
Search for extraterrestrial intelligence
Petroleum geology
Systematics
Systematics is the study of the diversification of living forms, both past and present, and the relationships among living things through time. Relationships are visualized as evolutionary trees (synonyms: phylogenetic trees, phylogenies). Phylogenies have two components: branching order (showing group relationships, graphically represented in cladograms) and branch length (showing amount of evolution). Phylogenetic trees of species and higher taxa are used to study the evolution of traits (e.g., anatomical or molecular characteristics) and the distribution of organisms (biogeography). Systematics, in other words, is used to understand the evolutionary history of life on Earth.
The word systematics is derived from the Latin word of Ancient Greek origin systema, which means systematic arrangement of organisms. Carl Linnaeus used 'Systema Naturae' as the title of his book.
Branches and applications
In the study of biological systematics, researchers use its different branches to further understand the relationships between differing organisms; these branches also determine the applications and uses of modern-day systematics.
Biological systematics classifies species by using three specific branches. Numerical systematics, or biometry, uses biological statistics to identify and classify animals. Biochemical systematics classifies and identifies animals based on the analysis of the material that makes up the living part of a cell—such as the nucleus, organelles, and cytoplasm. Experimental systematics identifies and classifies animals based on the evolutionary units that comprise a species, as well as their importance in evolution itself. Factors such as mutations, genetic divergence, and hybridization all are considered evolutionary units.
With the specific branches, researchers are able to determine the applications and uses for modern-day systematics. These applications include:
Studying the diversity of organisms and the differentiation between extinct and living creatures. Biologists study the well-understood relationships by making many different diagrams and "trees" (cladograms, phylogenetic trees, phylogenies, etc.).
Including the scientific names of organisms, species descriptions and overviews, taxonomic orders, and classifications of evolutionary and organism histories.
Explaining the biodiversity of the planet and its organisms. Such systematic study informs conservation efforts.
Manipulating and controlling the natural world. This includes the practice of 'biological control', the intentional introduction of natural predators and disease.
Definition and relation with taxonomy
John Lindley provided an early definition of systematics in 1830, although he wrote of "systematic botany" rather than using the term "systematics".
In 1970 Michener et al. defined "systematic biology" and "taxonomy" (terms that are often confused and used interchangeably) in relationship to one another as follows:
Systematic biology (hereafter called simply systematics) is the field that (a) provides scientific names for organisms, (b) describes them, (c) preserves collections of them, (d) provides classifications for the organisms, keys for their identification, and data on their distributions, (e) investigates their evolutionary histories, and (f) considers their environmental adaptations. This is a field with a long history that in recent years has experienced a notable renaissance, principally with respect to theoretical content. Part of the theoretical material has to do with evolutionary areas (topics e and f above), the rest relates especially to the problem of classification. Taxonomy is that part of Systematics concerned with topics (a) to (d) above.
The term "taxonomy" was coined by Augustin Pyramus de Candolle while the term "systematic" was coined by Carl Linnaeus the father of taxonomy.
Taxonomy, systematic biology, systematics, biosystematics, scientific classification, biological classification, phylogenetics: At various times in history, all these words have had overlapping, related meanings. However, in modern usage, they can all be considered synonyms of each other.
For example, Webster's 9th New Collegiate Dictionary of 1987 treats "classification", "taxonomy", and "systematics" as synonyms. According to this work, the terms originated in 1790, c. 1828, and in 1888 respectively. Some claim systematics alone deals specifically with relationships through time, and that it can be synonymous with phylogenetics, broadly dealing with the inferred hierarchy of organisms. This means it would be a subset of taxonomy as it is sometimes regarded, but the inverse is claimed by others.
Europeans tend to use the terms "systematics" and "biosystematics" for the study of biodiversity as a whole, whereas North Americans tend to use "taxonomy" more frequently. However, taxonomy, and in particular alpha taxonomy, is more specifically the identification, description, and naming (i.e. nomenclature) of organisms,
while "classification" focuses on placing organisms within hierarchical groups that show their relationships to other organisms. All of these biological disciplines can deal with both extinct and extant organisms.
Systematics uses taxonomy as a primary tool in understanding, as nothing about an organism's relationships with other living things can be understood without it first being properly studied and described in sufficient detail to identify and classify it correctly. Scientific classifications are aids in recording and reporting information to other scientists and to laymen. The systematist, a scientist who specializes in systematics, must, therefore, be able to use existing classification systems, or at least know them well enough to skilfully justify not using them.
Phenetics was an attempt to determine the relationships of organisms through a measure of overall similarity, making no distinction between plesiomorphies (shared ancestral traits) and apomorphies (derived traits). From the late-20th century onwards, it was superseded by cladistics, which rejects plesiomorphies in attempting to resolve the phylogeny of Earth's various organisms through time. Systematists generally make extensive use of molecular biology and of computer programs to study organisms.
Taxonomic characters
Taxonomic characters are the taxonomic attributes that can be used to provide the evidence from which relationships (the phylogeny) between taxa are inferred. Kinds of taxonomic characters include:
Morphological characters
General external morphology
Special structures (e.g. genitalia)
Internal morphology (anatomy)
Embryology
Karyology and other cytological factors
Physiological characters
Metabolic factors
Body secretions
Genic sterility factors
Molecular characters
Immunological distance
Electrophoretic differences
Amino acid sequences of proteins
DNA hybridization
DNA and RNA sequences
Restriction endonuclease analyses
Other molecular differences
Behavioral characters
Courtship and other ethological isolating mechanisms
Other behavior patterns
Ecological characters
Habit and habitats
Food
Seasonal variations
Parasites and hosts
Geographic characters
General biogeographic distribution patterns
Sympatric-allopatric relationship of populations
See also
Cladistics – a methodology in systematics
Evolutionary systematics – a school of systematics
Global biodiversity
Phenetics – a methodology in systematics that does not infer phylogeny
Phylogeny – the historical relationships between lineages of organism
16S ribosomal RNA – an intensively studied nucleic acid that has been useful in phylogenetics
Phylogenetic comparative methods – use of evolutionary trees in other studies, such as biodiversity, comparative biology, adaptation, or evolutionary mechanisms
References
Notes
Further reading
Brower, Andrew V. Z. and Randall T. Schuh. 2021. Biological Systematics: Principles and Applications, 3rd edn.
Simpson, Michael G. 2005. Plant Systematics.
Wiley, Edward O. and Bruce S. Lieberman. 2011. "Phylogenetics: Theory and Practice of Phylogenetic Systematics, 2nd edn."
External links
Society of Australian Systematic Biologists
Society of Systematic Biologists
The Willi Hennig Society
Evolutionary biology
Biological classification
Analytical technique | An analytical technique is a method used to determine a chemical or physical property of a chemical substance, chemical element, or mixture. There is a wide variety of techniques used for analysis, from simple weighing to advanced techniques using highly specialized instrumentation.
Classical methods of analysis
Classical methods of analysis are the basic analytical procedures widely used in laboratories. Gravimetric analysis determines an analyte from the measured mass of a sample or of a derivative of it. Titrimetry is a family of techniques used to determine the concentration of an analyte from the volume of reagent required to react with it completely.
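As an illustration of the titrimetric idea, the sketch below converts an end-point volume into an analyte concentration assuming a simple 1:1 stoichiometry; the reagents and numbers are hypothetical.

```python
def analyte_concentration(c_titrant, v_titrant, v_analyte, ratio=1.0):
    """Concentration of analyte from titration data.

    c_titrant : titrant concentration (mol/L)
    v_titrant : titrant volume at the end point (same unit as v_analyte)
    v_analyte : volume of the analyte solution
    ratio     : mol analyte per mol titrant from the balanced equation
    """
    moles_titrant = c_titrant * v_titrant
    return ratio * moles_titrant / v_analyte

# Hypothetical data: 21.50 mL of 0.100 M NaOH neutralizes 25.00 mL of HCl (1:1 stoichiometry)
print(f"{analyte_concentration(0.100, 21.50, 25.00):.4f} M")  # 0.0860 M HCl
```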
Spectrochemical analysis
A spectrometer determines chemical composition by measuring a spectrum. A common spectrometric technique in analytical chemistry is mass spectrometry. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, which are then separated and analyzed according to their mass-to-charge ratios.
NMR spectroscopy involves exciting an NMR-active sample and then measuring the effects of this magnetic excitation. From this, the bonds present in a sample can be determined.
Electroanalytical analysis
Electroanalytical methods utilize the potential or current of an electrochemical cell. The three main sections of this type of analysis are potentiometry, coulometry, and voltammetry. Potentiometry measures the cell's potential, coulometry measures the charge passed through the cell over time, and voltammetry measures the change in current as the cell potential is varied.
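As an example of the kind of relation underlying potentiometric measurements, the sketch below evaluates the Nernst equation for an electrode potential; the standard potential and reaction quotient used are hypothetical.

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1

def nernst_potential(e_standard, n_electrons, reaction_quotient, temperature=298.15):
    """Electrode potential from the Nernst equation: E = E0 - (RT / nF) * ln(Q)."""
    return e_standard - (R * temperature) / (n_electrons * F) * math.log(reaction_quotient)

# Hypothetical one-electron couple with E0 = 0.77 V and reaction quotient Q = 10
print(f"{nernst_potential(0.77, 1, 10):.3f} V")  # 0.711 V
```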
Chromatography
Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. The various types of chromatography differ in the media used to separate the analyte from the sample. In thin-layer chromatography, the analyte mixture separates as it moves up a coated plate, carried by a volatile mobile phase. In gas chromatography, a carrier gas transports the volatile analytes, which separate as they pass through the column. A common method for chromatography using a liquid mobile phase is high-performance liquid chromatography.
See also
List of chemical analysis methods
List of materials analysis methods
Microanalysis
Ion beam analysis
Rutherford backscattering spectroscopy
Nuclear reaction analysis
Clinical chemistry
Radioanalytical chemistry
Calorimeter
References
Analytical chemistry
In vitro | In vitro (meaning in glass, or in the glass) studies are performed with microorganisms, cells, or biological molecules outside their normal biological context. Colloquially called "test-tube experiments", these studies in biology and its subdisciplines are traditionally done in labware such as test tubes, flasks, Petri dishes, and microtiter plates. Studies conducted using components of an organism that have been isolated from their usual biological surroundings permit a more detailed or more convenient analysis than can be done with whole organisms; however, results obtained from in vitro experiments may not fully or accurately predict the effects on a whole organism. In contrast to in vitro experiments, in vivo studies are those conducted in living organisms, including humans (where they are known as clinical trials) and whole plants.
Definition
In vitro (Latin for "in glass"; often not italicized in English usage) studies are conducted using components of an organism that have been isolated from their usual biological surroundings, such as microorganisms, cells, or biological molecules. For example, microorganisms or cells can be studied in artificial culture media, and proteins can be examined in solutions. Colloquially called "test-tube experiments", these studies in biology, medicine, and their subdisciplines are traditionally done in test tubes, flasks, Petri dishes, etc. They now involve the full range of techniques used in molecular biology, such as the omics.
In contrast, studies conducted in living beings (microorganisms, animals, humans, or whole plants) are called in vivo.
Examples
Examples of in vitro studies include: the isolation, growth and identification of cells derived from multicellular organisms (in cell or tissue culture); subcellular components (e.g. mitochondria or ribosomes); cellular or subcellular extracts (e.g. wheat germ or reticulocyte extracts); purified molecules (such as proteins, DNA, or RNA); and the commercial production of antibiotics and other pharmaceutical products. Viruses, which only replicate in living cells, are studied in the laboratory in cell or tissue culture, and many animal virologists refer to such work as being in vitro to distinguish it from in vivo work in whole animals.
Polymerase chain reaction is a method for selective replication of specific DNA and RNA sequences in the test tube.
Protein purification involves the isolation of a specific protein of interest from a complex mixture of proteins, often obtained from homogenized cells or tissues.
In vitro fertilization is used to allow spermatozoa to fertilize eggs in a culture dish before implanting the resulting embryo or embryos into the uterus of the prospective mother.
In vitro diagnostics refers to a wide range of medical and veterinary laboratory tests that are used to diagnose diseases and monitor the clinical status of patients using samples of blood, cells, or other tissues obtained from a patient.
In vitro testing has been used to characterize specific absorption, distribution, metabolism, and excretion (ADME) processes of drugs or general chemicals inside a living organism; for example, Caco-2 cell experiments can be performed to estimate the absorption of compounds through the lining of the gastrointestinal tract; the partitioning of the compounds between organs can be determined to study distribution mechanisms; and suspension or plated cultures of primary hepatocytes or hepatocyte-like cell lines (Hep G2, HepaRG) can be used to study and quantify the metabolism of chemicals. These ADME process parameters can then be integrated into so-called "physiologically based pharmacokinetic models" or PBPK.
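As an example of the quantities extracted from such assays, Caco-2 permeation data are commonly reduced to an apparent permeability, Papp = (dQ/dt) / (A · C0); the sketch below uses invented numbers and is not tied to any particular protocol.

```python
def apparent_permeability(dq_dt, area_cm2, c0_donor):
    """Papp = (dQ/dt) / (A * C0), returned in cm/s.

    dq_dt    : rate of compound appearing in the receiver compartment (nmol/s)
    area_cm2 : surface area of the cell monolayer (cm^2)
    c0_donor : initial donor concentration (nmol/cm^3, numerically equal to micromolar)
    """
    return dq_dt / (area_cm2 * c0_donor)

# Invented numbers: 2e-5 nmol/s across a 1.12 cm^2 monolayer from a 10 micromolar donor solution
print(f"{apparent_permeability(2e-5, 1.12, 10):.1e} cm/s")  # ~1.8e-06 cm/s
```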
Advantages
In vitro studies permit a species-specific, simpler, more convenient, and more detailed analysis than can be done with the whole organism. Just as studies in whole animals more and more replace human trials, so are in vitro studies replacing studies in whole animals.
Simplicity
Living organisms are extremely complex functional systems that are made up of, at a minimum, many tens of thousands of genes, protein molecules, RNA molecules, small organic compounds, inorganic ions, and complexes in an environment that is spatially organized by membranes, and in the case of multicellular organisms, organ systems. These myriad components interact with each other and with their environment in a way that processes food, removes waste, moves components to the correct location, and is responsive to signalling molecules, other organisms, light, sound, heat, taste, touch, and balance.
This complexity makes it difficult to identify the interactions between individual components and to explore their basic biological functions. In vitro work simplifies the system under study, so the investigator can focus on a small number of components.
For example, the identity of proteins of the immune system (e.g. antibodies), and the mechanism by which they recognize and bind to foreign antigens would remain very obscure if not for the extensive use of in vitro work to isolate the proteins, identify the cells and genes that produce them, study the physical properties of their interaction with antigens, and identify how those interactions lead to cellular signals that activate other components of the immune system.
Species specificity
Another advantage of in vitro methods is that human cells can be studied without "extrapolation" from an experimental animal's cellular response.
Convenience, automation
In vitro methods can be miniaturized and automated, yielding high-throughput screening methods for testing molecules in pharmacology or toxicology.
Disadvantages
The primary disadvantage of in vitro experimental studies is that it may be challenging to extrapolate from the results of in vitro work back to the biology of the intact organism. Investigators doing in vitro work must be careful to avoid over-interpretation of their results, which can lead to erroneous conclusions about organismal and systems biology.
For example, scientists developing a new antiviral drug to treat an infection with a pathogenic virus (e.g., HIV-1) may find that a candidate drug functions to prevent viral replication in an in vitro setting (typically cell culture). However, before this drug is used in the clinic, it must progress through a series of in vivo trials to determine if it is safe and effective in intact organisms (typically small animals, primates, and humans in succession). Typically, most candidate drugs that are effective in vitro prove to be ineffective in vivo because of issues associated with delivery of the drug to the affected tissues, toxicity towards essential parts of the organism that were not represented in the initial in vitro studies, or other issues.
In vitro test batteries
A method which could help decrease animal testing is the use of in vitro test batteries, in which several in vitro assays are compiled to cover multiple endpoints. Within developmental neurotoxicity and reproductive toxicity, there are hopes for test batteries to become easy screening methods for prioritizing which chemicals should be risk assessed, and in which order. Within ecotoxicology, in vitro test batteries are already in use for regulatory purposes and for toxicological evaluation of chemicals. In vitro tests can also be combined with in vivo testing to make an in vitro–in vivo test battery, for example for pharmaceutical testing.
In vitro to in vivo extrapolation
Results obtained from in vitro experiments cannot usually be transposed, as is, to predict the reaction of an entire organism in vivo. Building a consistent and reliable extrapolation procedure from in vitro results to in vivo is therefore extremely important. Solutions include:
Increasing the complexity of in vitro systems to reproduce tissues and interactions between them (as in "human on chip" systems)
Using mathematical modeling to numerically simulate the behavior of the complex system, where the in vitro data provide model parameter values
These two approaches are not incompatible; better in vitro systems provide better data to mathematical models. However, increasingly sophisticated in vitro experiments collect increasingly numerous, complex, and challenging data to integrate. Mathematical models, such as systems biology models, are much needed here.
Extrapolating in pharmacology
In pharmacology, IVIVE (in vitro to in vivo extrapolation) can be used to approximate pharmacokinetics (PK) or pharmacodynamics (PD).
Since the timing and intensity of effects on a given target depend on the concentration time course of candidate drug (parent molecule or metabolites) at that target site, in vivo tissue and organ sensitivities can be completely different or even inverse of those observed on cells cultured and exposed in vitro. That indicates that extrapolating effects observed in vitro needs a quantitative model of in vivo PK. Physiologically based PK (PBPK) models are generally accepted to be central to the extrapolations.
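As a minimal illustration of the kind of kinetic model involved, the sketch below evaluates a one-compartment PK model after an intravenous bolus; real PBPK models contain many physiologically based compartments, and the parameters here are invented.

```python
import math

def one_compartment_concentration(dose_mg, volume_l, k_elim_per_h, times_h):
    """C(t) = (dose / V) * exp(-k_e * t) after an intravenous bolus dose."""
    c0 = dose_mg / volume_l
    return [c0 * math.exp(-k_elim_per_h * t) for t in times_h]

# Invented parameters: 100 mg IV bolus, 40 L volume of distribution, 4 h elimination half-life
k_e = math.log(2) / 4.0
times = [0, 2, 4, 8]
for t, c in zip(times, one_compartment_concentration(100, 40, k_e, times)):
    print(f"t = {t} h : C = {c:.2f} mg/L")
```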
In the case of early effects or those without intercellular communications, the same cellular exposure concentration is assumed to cause the same effects, both qualitatively and quantitatively, in vitro and in vivo. In these conditions, it is enough to develop a simple PD model of the dose–response relationship observed in vitro and transpose it without changes to predict in vivo effects.
See also
Animal testing
Ex vivo
In situ
In utero
In vivo
In silico
In papyro
Animal in vitro cellular and developmental biology
Plant in vitro cellular and developmental biology
In vitro toxicology
In vitro to in vivo extrapolation
Slice preparation
References
External links
Latin biological phrases
Alternatives to animal testing
Animal test conditions
Laboratory techniques
Phylogenetics | In biology, phylogenetics is the study of the evolutionary history of life; inferring that history from genetic and other data is known as phylogenetic inference. It establishes the relationships between organisms from empirical data on observed heritable traits: DNA sequences, protein amino acid sequences, and morphology. The result is a phylogenetic tree, a diagram setting out the hypothesized relationships between organisms and their evolutionary history.
The tips of a phylogenetic tree can be living taxa or fossils, which represent the present time or "end" of an evolutionary lineage, respectively. A phylogenetic diagram can be rooted or unrooted. A rooted tree diagram indicates the hypothetical common ancestor of the tree. An unrooted tree diagram (a network) makes no assumption about the ancestral line, and does not show the origin or "root" of the taxa in question or the direction of inferred evolutionary transformations.
In addition to their use for inferring phylogenetic patterns among taxa, phylogenetic analyses are often employed to represent relationships among genes or individual organisms. Such uses have become central to understanding biodiversity, evolution, ecology, and genomes.
Phylogenetics is a component of systematics that uses similarities and differences of the characteristics of species to interpret their evolutionary relationships and origins. Phylogenetics focuses on whether the characteristics of a species reinforce a phylogenetic inference that it diverged from the most recent common ancestor of a taxonomic group.
In the field of cancer research, phylogenetics can be used to study the clonal evolution of tumors and molecular chronology, predicting and showing how cell populations vary throughout the progression of the disease and during treatment, using whole genome sequencing techniques. The evolutionary processes behind cancer progression are quite different from those in most species and are important to phylogenetic inference; these differences manifest in several areas: the types of aberrations that occur, the rates of mutation, the high heterogeneity (variability) of tumor cell subclones, and the absence of genetic recombination.
Phylogenetics can also aid in drug design and discovery. Phylogenetics allows scientists to organize species and can show which species are likely to have inherited particular traits that are medically useful, such as producing biologically active compounds - those that have effects on the human body. For example, in drug discovery, venom-producing animals are particularly useful. Venoms from these animals produce several important drugs, e.g., ACE inhibitors and Prialt (Ziconotide). To find new venoms, scientists turn to phylogenetics to screen for closely related species that may have the same useful traits. A phylogenetic tree can show which species of fish have an origin of venom, and which related fish may also carry the trait. Using this approach in studying venomous fish, biologists are able to identify the fish species that may be venomous. Biologists have used this approach in many species such as snakes and lizards.
In forensic science, phylogenetic tools are useful to assess DNA evidence for court cases. For example, a simple phylogenetic tree of viruses A–E shows the relationships between the viruses, e.g., that all of the viruses are descendants of virus A.
HIV forensics uses phylogenetic analysis to track the differences in HIV genes and determine the relatedness of two samples. Phylogenetic analysis has been used in criminal trials to exonerate or implicate individuals. HIV forensics does have its limitations: it cannot be the sole proof of transmission between individuals, and phylogenetic analysis that shows transmission relatedness does not indicate the direction of transmission.
Taxonomy and classification
Taxonomy is the identification, naming, and classification of organisms. Compared to systemization, classification emphasizes whether a species has characteristics of a taxonomic group. The Linnaean classification system developed in the 1700s by Carolus Linnaeus is the foundation for modern classification methods. Linnaean classification relies on an organism's phenotype or physical characteristics to group and organize species. With the emergence of biochemistry, organism classifications are now usually based on phylogenetic data, and many systematists contend that only monophyletic taxa should be recognized as named groups. The degree to which classification depends on inferred evolutionary history differs depending on the school of taxonomy: phenetics ignores phylogenetic speculation altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reflect phylogeny in its classifications by only recognizing groups based on shared, derived characters (synapomorphies); evolutionary taxonomy tries to take into account both the branching pattern and "degree of difference" to find a compromise between them.
Inference of a phylogenetic tree
Usual methods of phylogenetic inference involve computational approaches implementing the optimality criteria and methods of parsimony, maximum likelihood (ML), and MCMC-based Bayesian inference. All these depend upon an implicit or explicit mathematical model describing the evolution of characters observed.
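To make the parsimony criterion concrete, the sketch below scores a single character on a fixed rooted binary tree with Fitch's small-parsimony algorithm; the topology and character states are invented for illustration.

```python
def fitch_score(tree, states):
    """Minimum number of state changes for one character on a rooted binary tree.

    tree   : nested tuples of leaf names, e.g. (("A", "B"), ("C", "D"))
    states : dict mapping each leaf name to its observed character state
    """
    changes = 0

    def post_order(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its state set is just the observed state
            return {states[node]}
        left = post_order(node[0])
        right = post_order(node[1])
        common = left & right
        if common:                           # children agree: keep the intersection
            return common
        changes += 1                         # children disagree: one change is required here
        return left | right

    post_order(tree)
    return changes

# Invented example: one aligned site observed in four taxa
tree = (("human", "chimp"), ("mouse", "rat"))
site = {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"}
print(fitch_score(tree, site))  # 1: a single substitution explains this site on this tree
```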
Phenetics, popular in the mid-20th century but now largely obsolete, used distance matrix-based methods to construct trees based on overall similarity in morphology or similar observable traits (i.e. in the phenotype or the overall similarity of DNA, not the DNA sequence), which was often assumed to approximate phylogenetic relationships.
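A minimal sketch of the distance-matrix approach in the phenetic spirit is average-linkage (UPGMA) clustering of a small, invented distance matrix using SciPy; dedicated phylogenetics software offers neighbor joining and related methods.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

taxa = ["A", "B", "C", "D"]
# Invented pairwise dissimilarities (symmetric, zero diagonal)
dist = np.array([
    [0.0, 0.2, 0.5, 0.6],
    [0.2, 0.0, 0.5, 0.6],
    [0.5, 0.5, 0.0, 0.3],
    [0.6, 0.6, 0.3, 0.0],
])

# UPGMA corresponds to average linkage on the condensed form of the distance matrix
merge_history = linkage(squareform(dist), method="average")
print(merge_history)  # each row: the two clusters joined, their distance, and the new cluster size
```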
Prior to 1950, phylogenetic inferences were generally presented as narrative scenarios. Such methods are often ambiguous and lack explicit criteria for evaluating alternative hypotheses.
Impacts of taxon sampling
In phylogenetic analysis, taxon sampling selects a small group of taxa to represent the evolutionary history of its broader population. This process is also known as stratified sampling or clade-based sampling. The practice occurs given limited resources to compare and analyze every species within a target population. Based on the representative group selected, the construction and accuracy of phylogenetic trees vary, which impacts derived phylogenetic inferences.
Unavailable datasets, such as an organism's incomplete DNA and protein amino acid sequences in genomic databases, directly restrict taxonomic sampling. Consequently, a significant source of error within phylogenetic analysis occurs due to inadequate taxon samples. Accuracy may be improved by increasing the number of genetic samples within its monophyletic group. Conversely, increasing sampling from outgroups extraneous to the target stratified population may decrease accuracy. Long branch attraction is a proposed explanation for this effect, in which unrelated branches are incorrectly grouped together, suggesting a shared evolutionary history.
There is debate over whether increasing the number of taxa sampled improves phylogenetic accuracy more than increasing the number of genes sampled per taxon. Differences in each method's sampling impact the number of nucleotide sites utilized in a sequence alignment, which may contribute to disagreements. For example, phylogenetic trees constructed utilizing a larger number of total nucleotides are generally more accurate, as supported by the bootstrapping replicability of phylogenetic trees from random sampling.
The graphic presented in Taxon Sampling, Bioinformatics, and Phylogenomics compares the correctness of phylogenetic trees generated using fewer taxa and more sites per taxon on the x-axis with more taxa and fewer sites per taxon on the y-axis. With fewer taxa, more genes are sampled amongst the taxonomic group; in comparison, with more taxa added to the taxonomic sampling group, fewer genes are sampled. Each method has the same total number of nucleotide sites sampled. Furthermore, the dotted line represents a 1:1 accuracy between the two sampling methods. As seen in the graphic, most of the plotted points are located below the dotted line, which indicates gravitation toward increased accuracy when sampling fewer taxa with more sites per taxon. The research performed utilizes four different phylogenetic tree construction models to verify the theory: neighbor-joining (NJ), minimum evolution (ME), unweighted maximum parsimony (MP), and maximum likelihood (ML). In the majority of models, sampling fewer taxa with more sites per taxon demonstrated higher accuracy.
Generally, with the alignment of a relatively equal number of total nucleotide sites, sampling more genes per taxon has higher bootstrapping replicability than sampling more taxa. However, unbalanced datasets within genomic databases make increasing the gene comparison per taxon in uncommonly sampled organisms increasingly difficult.
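The bootstrapping mentioned above rests on resampling alignment columns with replacement; the sketch below generates one pseudo-replicate alignment from a toy dataset, which would then be re-analyzed with the chosen tree-building method.

```python
import random

def bootstrap_alignment(alignment, rng=random):
    """Resample alignment columns with replacement (one bootstrap pseudo-replicate).

    alignment : dict mapping taxon name -> aligned sequence string (all of equal length)
    """
    length = len(next(iter(alignment.values())))
    cols = [rng.randrange(length) for _ in range(length)]
    return {taxon: "".join(seq[i] for i in cols) for taxon, seq in alignment.items()}

# Toy alignment; in practice each replicate is fed back into the tree-inference step
alignment = {"taxon1": "ACGTACGT", "taxon2": "ACGTACGA", "taxon3": "ACTTACGA"}
print(bootstrap_alignment(alignment))
```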
History
Overview
The term "phylogeny" derives from the German , introduced by Haeckel in 1866, and the Darwinian approach to classification became known as the "phyletic" approach. It can be traced back to Aristotle, who wrote in his Posterior Analytics, "We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses."
Ernst Haeckel's recapitulation theory
The modern concept of phylogenetics evolved primarily as a disproof of a previously widely accepted theory. During the late 19th century, Ernst Haeckel's recapitulation theory, or "biogenetic fundamental law", was widely popular. It was often expressed as "ontogeny recapitulates phylogeny", i.e. the development of a single organism during its lifetime, from germ to adult, successively mirrors the adult stages of successive ancestors of the species to which it belongs. But this theory has long been rejected. Instead, ontogeny evolves – the phylogenetic history of a species cannot be read directly from its ontogeny, as Haeckel thought would be possible, but characters from ontogeny can be (and have been) used as data for phylogenetic analyses; the more closely related two species are, the more apomorphies their embryos share.
Timeline of key points
14th century, lex parsimoniae (parsimony principle), William of Ockham, English philosopher, theologian, and Franciscan friar, but the idea actually goes back to Aristotle, as a precursor concept. He introduced the concept of Occam's razor, which is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. Though he did not use these exact words, the principle can be summarized as "Entities must not be multiplied beyond necessity." The principle advocates that when presented with competing hypotheses about the same prediction, one should prefer the one that requires the fewest assumptions.
1763, Bayesian probability, Rev. Thomas Bayes, a precursor concept. Bayesian probability began a resurgence in the 1950s, allowing scientists in the computing field to pair traditional Bayesian statistics with other more modern techniques. It is now used as a blanket term for several related interpretations of probability as an amount of epistemic confidence.
18th century, Pierre Simon (Marquis de Laplace), perhaps first to use ML (maximum likelihood), precursor concept. His work gave way to the Laplace distribution, which can be directly linked to least absolute deviations.
1809, evolutionary theory, Philosophie Zoologique, Jean-Baptiste de Lamarck, precursor concept, foreshadowed in the 17th century and 18th century by Voltaire, Descartes, and Leibniz, with Leibniz even proposing evolutionary changes to account for observed gaps suggesting that many species had become extinct, others transformed, and different species that share common traits may have at one time been a single race, also foreshadowed by some early Greek philosophers such as Anaximander in the 6th century BC and the atomists of the 5th century BC, who proposed rudimentary theories of evolution
1837, Darwin's notebooks show an evolutionary tree
1840, American Geologist Edward Hitchcock published what is considered to be the first paleontological "Tree of Life". Many critiques, modifications, and explanations would follow.
1843, distinction between homology and analogy (the latter now referred to as homoplasy), Richard Owen, precursor concept. Homology is the term used to characterize the similarity of features that can be parsimoniously explained by common ancestry. Homoplasy is the term used to describe a feature that has been gained or lost independently in separate lineages over the course of evolution.
1858, Paleontologist Heinrich Georg Bronn (1800–1862) published a hypothetical tree illustrating the paleontological "arrival" of new, similar species following the extinction of an older species. Bronn did not propose a mechanism responsible for such phenomena, precursor concept.
1858, elaboration of evolutionary theory, Darwin and Wallace, also in Origin of Species by Darwin the following year, precursor concept.
1866, Ernst Haeckel, first publishes his phylogeny-based evolutionary tree, precursor concept. Haeckel introduces the now-disproved recapitulation theory. He introduced the term "Cladus" as a taxonomic category just below subphylum.
1893, Dollo's Law of Character State Irreversibility, precursor concept. Dollo's Law of Irreversibility states that "an organism never comes back exactly to its previous state due to the indestructible nature of the past, it always retains some trace of the transitional stages through which it has passed."
1912, ML (maximum likelihood) recommended, analyzed, and popularized by Ronald Fisher, precursor concept. Fisher is one of the main contributors to the early 20th-century revival of Darwinism, and has been called the "greatest of Darwin's successors" for his contributions to the revision of the theory of evolution and his use of mathematics to combine Mendelian genetics and natural selection in the 20th century "modern synthesis".
1921, Tillyard uses term "phylogenetic" and distinguishes between archaic and specialized characters in his classification system.
1940, Lucien Cuénot coined the term "clade" in 1940: "terme nouveau de clade (du grec κλάδος, branche) [A new term clade (from the Greek word klados, meaning branch)]". He used it for evolutionary branching.
1947, Bernhard Rensch introduced the term Kladogenesis in his German book Neuere Probleme der Abstammungslehre Die transspezifische Evolution, translated into English in 1959 as Evolution Above the Species Level (still using the same spelling).
1949, Jackknife resampling, Maurice Quenouille (foreshadowed in '46 by Mahalanobis and extended in '58 by Tukey), precursor concept.
1950, Willi Hennig's classic formalization. Hennig is considered the founder of phylogenetic systematics, and published his first works in German in this year. He also asserted a version of the parsimony principle, stating that the presence of apomorphous characters in different species 'is always reason for suspecting kinship, and that their origin by convergence should not be presumed a priori'. This has been considered a foundational view of phylogenetic inference.
1952, William Wagner's ground plan divergence method.
1957, Julian Huxley adopted Rensch's terminology as "cladogenesis" with a full definition: "Cladogenesis I have taken over directly from Rensch, to denote all splitting, from subspeciation through adaptive radiation to the divergence of phyla and kingdoms." With it he introduced the word "clades", defining it as: "Cladogenesis results in the formation of delimitable monophyletic units, which may be called clades."
1960, Arthur Cain and Geoffrey Ainsworth Harrison coined "cladistic" to mean evolutionary relationship.
1963, first attempt to use ML (maximum likelihood) for phylogenetics, Edwards and Cavalli-Sforza.
1965
Camin-Sokal parsimony, first parsimony (optimization) criterion and first computer program/algorithm for cladistic analysis both by Camin and Sokal.
Character compatibility method, also called clique analysis, introduced independently by Camin and Sokal (loc. cit.) and E. O. Wilson.
1966
English translation of Hennig.
"Cladistics" and "cladogram" coined (Webster's, loc. cit.)
1969
Dynamic and successive weighting, James Farris.
Wagner parsimony, Kluge and Farris.
CI (consistency index), Kluge and Farris.
Introduction of pairwise compatibility for clique analysis, Le Quesne.
1970, Wagner parsimony generalized by Farris.
1971
First successful application of ML (maximum likelihood) to phylogenetics (for protein sequences), Neyman.
Fitch parsimony, Walter M. Fitch. These gave way to the most basic ideas of maximum parsimony. Fitch is known for his work on reconstructing phylogenetic trees from protein and DNA sequences. His definition of orthologous sequences has been referenced in many research publications.
NNI (nearest neighbour interchange), first branch-swapping search strategy, developed independently by Robinson and Moore et al.
ME (minimum evolution), Kidd and Sgaramella-Zonta (it is unclear if this is the pairwise distance method or related to ML as Edwards and Cavalli-Sforza call ML "minimum evolution").
1972, Adams consensus, Adams.
1976, prefix system for ranks, Farris.
1977, Dollo parsimony, Farris.
1979
Nelson consensus, Nelson.
MAST (maximum agreement subtree)((GAS) greatest agreement subtree), a consensus method, Gordon.
Bootstrap, Bradley Efron, precursor concept.
1980, PHYLIP, first software package for phylogenetic analysis, Joseph Felsenstein. A free computational phylogenetics package of programs for inferring evolutionary trees (phylogenies). One of its programs, Drawgram, produces rooted tree diagrams ("drawgrams").
1981
Majority consensus, Margush and MacMorris.
Strict consensus, Sokal and Rohlf.
First computationally efficient ML (maximum likelihood) algorithm, Felsenstein. Felsenstein created the Felsenstein maximum likelihood method, used for the inference of phylogeny, which evaluates a hypothesis about evolutionary history in terms of the probability that the proposed model and the hypothesized history would give rise to the observed data set.
1982
PHYSIS, Mickevich and Farris
Branch and bound, Hendy and Penny
1985
First cladistic analysis of eukaryotes based on combined phenotypic and genotypic evidence Diana Lipscomb.
First issue of Cladistics.
First phylogenetic application of bootstrap, Felsenstein.
First phylogenetic application of jackknife, Scott Lanyon.
1986, MacClade, Maddison and Maddison.
1987, neighbor-joining method, Saitou and Nei
1988, Hennig86 (version 1.5), Farris
Bremer support (decay index), Bremer.
1989
RI (retention index), RCI (rescaled consistency index), Farris.
HER (homoplasy excess ratio), Archie.
1990
combinable components (semi-strict) consensus, Bremer.
SPR (subtree pruning and regrafting), TBR (tree bisection and reconnection), Swofford and Olsen.
1991
DDI (data decisiveness index), Goloboff.
First cladistic analysis of eukaryotes based only on phenotypic evidence, Lipscomb.
1993, implied weighting Goloboff.
1994, reduced consensus: RCC (reduced cladistic consensus) for rooted trees, Wilkinson.
1995, reduced consensus RPC (reduced partition consensus) for unrooted trees, Wilkinson.
1996, first working methods for BI (Bayesian inference) developed independently by Li, by Mau, and by Rannala and Yang, all using MCMC (Markov chain Monte Carlo).
1998, TNT (Tree Analysis Using New Technology), Goloboff, Farris, and Nixon.
1999, Winclada, Nixon.
2003, symmetrical resampling, Goloboff.
2004, 2005, similarity metric (using an approximation to Kolmogorov complexity) or NCD (normalized compression distance), Li et al., Cilibrasi and Vitanyi.
Uses of phylogenetic analysis
Pharmacology
One use of phylogenetic analysis involves the pharmacological examination of closely related groups of organisms. Advances in cladistics analysis through faster computer programs and improved molecular techniques have increased the precision of phylogenetic determination, allowing for the identification of species with pharmacological potential.
Historically, phylogenetic screens for pharmacological purposes were used in a basic manner, such as studying the Apocynaceae family of plants, which includes alkaloid-producing species like Catharanthus, known for producing vincristine, an antileukemia drug. Modern techniques now enable researchers to study close relatives of a species to uncover either a higher abundance of important bioactive compounds (e.g., species of Taxus for taxol) or natural variants of known pharmaceuticals (e.g., species of Catharanthus for different forms of vincristine or vinblastine).
Biodiversity
Phylogenetic analysis has also been applied to biodiversity studies among fungi. Phylogenetic analysis helps understand the evolutionary history of various groups of organisms, identify relationships between different species, and predict future evolutionary changes. Emerging imagery systems and new analysis techniques allow for the discovery of more genetic relationships in biodiverse fields, which can aid in conservation efforts by identifying rare species that could benefit ecosystems globally.
Infectious disease epidemiology
Whole-genome sequence data from outbreaks or epidemics of infectious diseases can provide important insights into transmission dynamics and inform public health strategies. Traditionally, studies have combined genomic and epidemiological data to reconstruct transmission events. However, recent research has explored deducing transmission patterns solely from genomic data using phylodynamics, which involves analyzing the properties of pathogen phylogenies. Phylodynamics uses theoretical models to compare predicted branch lengths with actual branch lengths in phylogenies to infer transmission patterns. Additionally, coalescent theory, which describes probability distributions on trees based on population size, has been adapted for epidemiological purposes. Another source of information within phylogenies that has been explored is "tree shape." These approaches, while computationally intensive, have the potential to provide valuable insights into pathogen transmission dynamics.
The structure of the host contact network significantly impacts the dynamics of outbreaks, and management strategies rely on understanding these transmission patterns. Pathogen genomes spreading through different contact network structures, such as chains, homogeneous networks, or networks with super-spreaders, accumulate mutations in distinct patterns, resulting in noticeable differences in the shape of phylogenetic trees, as illustrated in Fig. 1. Researchers have analyzed the structural characteristics of phylogenetic trees generated from simulated bacterial genome evolution across multiple types of contact networks. By examining simple topological properties of these trees, researchers can classify them into chain-like, homogeneous, or super-spreading dynamics, revealing transmission patterns. These properties form the basis of a computational classifier used to analyze real-world outbreaks. Computational predictions of transmission dynamics for each outbreak often align with known epidemiological data.
Different transmission networks result in quantitatively different tree shapes. To determine whether tree shapes captured information about underlying disease transmission patterns, researchers simulated the evolution of a bacterial genome over three types of outbreak contact networks—homogeneous, super-spreading, and chain-like. They summarized the resulting phylogenies with five metrics describing tree shape. Figures 2 and 3 illustrate the distributions of these metrics across the three types of outbreaks, revealing clear differences in tree topology depending on the underlying host contact network.
Super-spreader networks give rise to phylogenies with higher Colless imbalance, longer ladder patterns, lower Δw, and deeper trees than those from homogeneous contact networks. Trees from chain-like networks are less variable, deeper, more imbalanced, and narrower than those from other networks.
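To illustrate one of the tree-shape statistics used in such analyses, the sketch below computes the unnormalized Colless imbalance of a rooted binary tree represented as nested tuples; the example topologies are invented.

```python
def colless_index(node):
    """Unnormalized Colless imbalance of a rooted, fully bifurcating tree.

    node : a leaf label (str) or a 2-tuple (left_subtree, right_subtree)
    Returns (imbalance, number_of_leaves).
    """
    if isinstance(node, str):
        return 0, 1
    imb_left, n_left = colless_index(node[0])
    imb_right, n_right = colless_index(node[1])
    return imb_left + imb_right + abs(n_left - n_right), n_left + n_right

balanced = (("a", "b"), ("c", "d"))       # perfectly balanced four-taxon tree
ladder = ((("a", "b"), "c"), "d")         # fully ladderized ("caterpillar") tree
print(colless_index(balanced)[0])  # 0
print(colless_index(ladder)[0])    # 3 -> more imbalanced, as expected for ladder-like trees
```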
Scatter plots can be used to visualize the relationship between two variables in pathogen transmission analysis, such as the number of infected individuals and the time since infection. These plots can help identify trends and patterns, such as whether the spread of the pathogen is increasing or decreasing over time, and can highlight potential transmission routes or super-spreader events. Box plots displaying the range, median, quartiles, and potential outliers of a dataset can also be valuable for analyzing pathogen transmission data, helping to identify important features in the data distribution. They may be used to quickly identify differences or similarities in the transmission data.
Disciplines other than biology
Phylogenetic tools and representations (trees and networks) can also be applied to philology, the study of the evolution of oral languages and written text and manuscripts, such as in the field of quantitative comparative linguistics.
Computational phylogenetics can be used to investigate a language as an evolutionary system. The evolution of human language closely corresponds with humans' biological evolution, which allows phylogenetic methods to be applied. The concept of a "tree" serves as an efficient way to represent relationships between languages and language splits. It also serves as a way of testing hypotheses about the connections and ages of language families. For example, relationships among languages can be shown by using cognates as characters. The phylogenetic tree of Indo-European languages shows the relationships between several of the languages in a timeline, as well as the similarity between words and word order.
There are three types of criticism of the use of phylogenetics in philology: the first argues that languages and species are different entities, so the same methods cannot be used to study both; the second concerns how phylogenetic methods are applied to linguistic data; and the third concerns the types of data used to construct the trees.
Bayesian phylogenetic methods, which are sensitive to how treelike the data are, allow for the reconstruction of relationships among languages, locally and globally. The main two reasons for the use of Bayesian phylogenetics are that (1) diverse scenarios can be included in the calculations and (2) the output is a sample of trees rather than a single tree claimed to be the true one.
The same process can be applied to texts and manuscripts. In Paleography, the study of historical writings and manuscripts, texts were replicated by scribes who copied from their source and alterations - i.e., 'mutations' - occurred when the scribe did not precisely copy the source.
Phylogenetics has been applied to archaeological artefacts such as the early hominin hand-axes, late Palaeolithic figurines, Neolithic stone arrowheads, Bronze Age ceramics, and historical-period houses. Bayesian methods have also been employed by archaeologists in an attempt to quantify uncertainty in the tree topology and divergence times of stone projectile point shapes in the European Final Palaeolithic and earliest Mesolithic.
See also
Angiosperm Phylogeny Group
Bauplan
Bioinformatics
Biomathematics
Coalescent theory
EDGE of Existence programme
Evolutionary taxonomy
Language family
Maximum parsimony
Microbial phylogenetics
Molecular phylogeny
Ontogeny
PhyloCode
Phylodynamics
Phylogenesis
Phylogenetic comparative methods
Phylogenetic network
Phylogenetic nomenclature
Phylogenetic tree viewers
Phylogenetics software
Phylogenomics
Phylogeny (psychoanalysis)
Phylogeography
Systematics
References
Bibliography
External links
Density functional theory | Density functional theory (DFT) is a computational quantum mechanical modelling method used in physics, chemistry and materials science to investigate the electronic structure (or nuclear structure) (principally the ground state) of many-body systems, in particular atoms, molecules, and the condensed phases. Using this theory, the properties of a many-electron system can be determined by using functionals, that is, functions that accept a function as input and return a single real number as output. In the case of DFT, these are functionals of the spatially dependent electron density. DFT is among the most popular and versatile methods available in condensed-matter physics, computational physics, and computational chemistry.
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation, in order to understand the origin of specific electric field gradients in crystals.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states, global potential energy surfaces, dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors. The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) or where dispersion competes significantly with other effects (e.g. in biomolecules). The development of new DFT methods designed to overcome this problem, by alterations to the functional or by the inclusion of additive terms, is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.
Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported to stray away from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, given that they are not functional derivatives of the exchange correlation energy with respect to the charge density. Consequently, it is not clear if the second theorem of DFT holds in such conditions.
Overview of method
In the context of computational materials science, ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. This DFT potential is constructed as the sum of an external potential, which is determined solely by the structure and the elemental composition of the system, and an effective potential, which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with N electrons can be studied as a set of N one-electron Schrödinger-like equations, which are also known as Kohn–Sham equations.
Origins
Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field, although they have since been generalized to encompass these.
The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It set down the groundwork for reducing the many-body problem of N electrons with 3N spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional.
In work that later won them the Nobel prize in chemistry, the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential. The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas, which can be obtained from the Thomas–Fermi model, and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals. Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated.
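For reference, the exchange part of the LDA has a closed analytic form derived from the homogeneous electron gas; in Hartree atomic units it can be written as

$$E_x^{\mathrm{LDA}}[n] = -\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3} \int n(\mathbf{r})^{4/3}\, d^3 r,$$

while the correlation part has no comparably simple closed form and is obtained from fits to the correlation energy of the uniform electron gas, as noted above.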
Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system.
Derivation and formalism
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation), generating a static external potential $V$, in which the electrons are moving. A stationary electronic state is then described by a wavefunction $\Psi(\mathbf{r}_1, \ldots, \mathbf{r}_N)$ satisfying the many-electron time-independent Schrödinger equation

$$\hat{H}\Psi = \left[\hat{T} + \hat{V} + \hat{U}\right]\Psi = \left[\sum_{i=1}^{N}\left(-\frac{\hbar^2}{2m_i}\nabla_i^2\right) + \sum_{i=1}^{N} V(\mathbf{r}_i) + \sum_{i<j}^{N} U(\mathbf{r}_i, \mathbf{r}_j)\right]\Psi = E\Psi,$$

where, for the $N$-electron system, $\hat{H}$ is the Hamiltonian, $E$ is the total energy, $\hat{T}$ is the kinetic energy, $\hat{V}$ is the potential energy from the external field due to positively charged nuclei, and $\hat{U}$ is the electron–electron interaction energy. The operators $\hat{T}$ and $\hat{U}$ are called universal operators, as they are the same for any $N$-electron system, while $\hat{V}$ is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term $\hat{U}$.
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants. While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with , onto a single-body problem without . In DFT the key variable is the electron density , which for a normalized is given by
This relation can be reversed, i.e., for a given ground-state density it is possible, in principle, to calculate the corresponding ground-state wavefunction . In other words, is a unique functional of ,
and consequently the ground-state expectation value of an observable is also a functional of :
In particular, the ground-state energy is a functional of :
where the contribution of the external potential can be written explicitly in terms of the ground-state density :
More generally, the contribution of the external potential can be written explicitly in terms of the density :
The functionals and are called universal functionals, while is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified , one then has to minimize the functional
with respect to , assuming one has reliable expressions for and . A successful minimization of the energy functional will yield the ground-state density and thus all other ground-state observables.
The variational problems of minimizing the energy functional can be solved by applying the Lagrangian method of undetermined multipliers. First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term,

$E_{s}[n] = \langle \Psi_{s}[n] | \hat{T} + \hat{V}_{s} | \Psi_{s}[n] \rangle,$

where $\hat{T}$ denotes the kinetic-energy operator, and $\hat{V}_{s}$ is an effective potential in which the particles are moving. Based on $E_{s}$, Kohn–Sham equations of this auxiliary noninteracting system can be derived:

$\left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V_{s}(\mathbf{r})\right] \varphi_{i}(\mathbf{r}) = \varepsilon_{i}\, \varphi_{i}(\mathbf{r}),$

which yields the orbitals $\varphi_{i}$ that reproduce the density $n(\mathbf{r})$ of the original many-body system:

$n(\mathbf{r}) = \sum_{i=1}^{N} |\varphi_{i}(\mathbf{r})|^{2}.$

The effective single-particle potential can be written as

$V_{s}(\mathbf{r}) = V(\mathbf{r}) + \int \frac{e^{2}\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \mathrm{d}^{3}r' + V_{\mathrm{XC}}[n(\mathbf{r})],$

where $V(\mathbf{r})$ is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion, and the last term $V_{\mathrm{XC}}$ is the exchange–correlation potential. Here, $V_{\mathrm{XC}}$ includes all the many-particle interactions. Since the Hartree term and $V_{\mathrm{XC}}$ depend on $n(\mathbf{r})$, which depends on the $\varphi_{i}$, which in turn depend on $V_{s}$, the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative) way. Usually one starts with an initial guess for $n(\mathbf{r})$, then calculates the corresponding $V_{s}$ and solves the Kohn–Sham equations for the $\varphi_{i}$. From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
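The self-consistency cycle just described can be summarized in a short program skeleton. The sketch below is a minimal, schematic Python loop rather than code from any particular DFT package; the callables build_veff and solve_ks are hypothetical placeholders for the grid- or basis-set-specific steps (constructing the external, Hartree and exchange–correlation potentials, and diagonalizing the resulting single-particle Hamiltonian).

```python
import numpy as np

def scf_loop(build_veff, solve_ks, n_init, tol=1e-6, max_iter=100, mixing=0.3):
    """Schematic Kohn-Sham self-consistent-field (SCF) iteration.

    build_veff(n) -> effective potential V_s for the current density n
    solve_ks(veff) -> (orbitals, new density) from the single-particle equations
    """
    n = n_init
    for _ in range(max_iter):
        veff = build_veff(n)                    # V_ext + Hartree + V_xc, all built from n
        orbitals, n_new = solve_ks(veff)        # solve the auxiliary noninteracting problem
        if np.max(np.abs(n_new - n)) < tol:     # density reproduces itself -> converged
            return orbitals, n_new
        n = (1.0 - mixing) * n + mixing * n_new  # simple linear mixing stabilizes the iteration
    raise RuntimeError("SCF did not converge within max_iter iterations")
```

Linear density mixing is only the simplest stabilization scheme; production codes typically use more elaborate mixers, but the overall loop structure is the same.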
Notes
The one-to-one correspondence between the electron density and the single-particle potential is not smooth: it contains various kinds of non-analytic structure, such as singularities, cuts and branches. This may indicate a limitation of the hope of representing the exchange–correlation functional in a simple analytic form.
It is possible to extend the DFT idea to the case of the Green function $G$ instead of the density $n$. The resulting object is called the Luttinger–Ward functional (or one of several similar functionals), written as a functional of $G$. However, $G$ is determined not as the minimum of this functional, but as its extremum. Thus we may face some theoretical and practical difficulties.
There is no one-to-one correspondence between the one-body density matrix $n(\mathbf{r}, \mathbf{r}')$ and the one-body potential $V(\mathbf{r}, \mathbf{r}')$. (All the eigenvalues of $n(\mathbf{r}, \mathbf{r}')$ are 1.) In other words, such an approach ends up with a theory similar to the Hartree–Fock (or hybrid) theory.
Relativistic formulation (ab initio functional forms)
The same theorems can be proven in the case of relativistic electrons, thereby providing a generalization of DFT for the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional.

Consider an electron in a hydrogen-like ion obeying the relativistic Dirac equation. The Hamiltonian $H$ for a relativistic electron moving in the Coulomb potential can be chosen in the following form (atomic units are used):

$H = c\, \boldsymbol{\alpha} \cdot \mathbf{p} + \beta m c^{2} + V(\mathbf{r}),$

where $V(\mathbf{r}) = -Ze^{2}/r$ is the Coulomb potential of a pointlike nucleus, $\mathbf{p}$ is the momentum operator of the electron, and $e$, $m$ and $c$ are the elementary charge, electron mass and the speed of light respectively; finally, $\boldsymbol{\alpha}$ and $\beta$ are a set of Dirac matrices, built from the 2 × 2 Pauli matrices $\boldsymbol{\sigma}$ and the 2 × 2 identity $I_{2}$:

$\boldsymbol{\alpha} = \begin{pmatrix} 0 & \boldsymbol{\sigma} \\ \boldsymbol{\sigma} & 0 \end{pmatrix}, \qquad \beta = \begin{pmatrix} I_{2} & 0 \\ 0 & -I_{2} \end{pmatrix}.$
To find the eigenfunctions and corresponding energies, one solves the eigenfunction equation

$H\Psi = E\Psi,$

where $\Psi$ is a four-component wavefunction and $E$ is the associated eigenenergy. It is demonstrated in Brack (1983) that application of the virial theorem to the eigenfunction equation produces the following formula for the eigenenergy of any bound state:
and analogously, the virial theorem applied to the eigenfunction equation with the square of the Hamiltonian yields
It is easy to see that both of the above formulae represent density functionals. The former formula can be easily generalized for the multi-electron case.
Neither of the functionals written above has extremals, of course, if a reasonably wide set of functions is allowed for variation. Nevertheless, it is possible to construct from them a density functional with the desired extremal properties, in the following way:

where the Kronecker delta symbol in the second term picks out any extremal of the functional represented by the first term, so that the second term vanishes for any function that is not an extremal of the first term. To proceed further, we seek the Lagrange equation for this functional. To do so, we isolate the linear part of the functional's increment when the argument function is varied:

Using the equation written above, it is straightforward to obtain the following formula for the functional derivative:

where the abbreviated quantities are defined as indicated, and the potential is evaluated at a point specified by the support of the variation function, which is supposed to be infinitesimal. To advance toward the Lagrange equation, we equate the functional derivative to zero and, after simple algebraic manipulations, arrive at the following equation:

Evidently, this equation can have a solution only if the indicated condition is satisfied. This last condition provides the Lagrange equation for the functional, which can finally be written in the following form:

Solutions of this equation represent extremals of the functional. It is easy to see that all real densities, that is, densities corresponding to the bound states of the system in question, are solutions of the above equation, which in this particular case could be called the Kohn–Sham equation. Looking back at the definition of the functional, we clearly see that it produces the energy of the system for an appropriate density, because the first term vanishes for such a density and the second one delivers the energy value.
Approximations (exchange–correlation functionals)
The major problem with DFT is that the exact functionals for exchange and correlation are not known, except for the free-electron gas. However, approximations exist which permit the calculation of certain physical quantities quite accurately. One of the simplest approximations is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:

$E_{\mathrm{XC}}^{\mathrm{LDA}}[n] = \int \varepsilon_{\mathrm{XC}}(n)\, n(\mathbf{r})\, \mathrm{d}^{3}r.$

The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin:

$E_{\mathrm{XC}}^{\mathrm{LSDA}}[n_{\uparrow}, n_{\downarrow}] = \int \varepsilon_{\mathrm{XC}}(n_{\uparrow}, n_{\downarrow})\, n(\mathbf{r})\, \mathrm{d}^{3}r.$
In LDA, the exchange–correlation energy is typically separated into the exchange part and the correlation part: $\varepsilon_{\mathrm{XC}} = \varepsilon_{\mathrm{X}} + \varepsilon_{\mathrm{C}}$. The exchange part is called the Dirac (or sometimes Slater) exchange, which takes the form $\varepsilon_{\mathrm{X}} \propto n^{1/3}$. There are, however, many mathematical forms for the correlation part. Highly accurate formulae for the correlation energy density have been constructed from quantum Monte Carlo simulations of jellium. A simple first-principles correlation functional has recently been proposed as well. Although unrelated to the Monte Carlo simulation, the two variants provide comparable accuracy.
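As an illustration of how simple the LDA exchange term is in practice, the sketch below evaluates the Dirac/Slater exchange energy per unit volume, $e_{\mathrm{X}}(n) = -\tfrac{3}{4}(3/\pi)^{1/3}\, n^{4/3}$ (in Hartree atomic units), on a grid of density values; the grid and densities are arbitrary illustrative inputs, not data from any real calculation.

```python
import numpy as np

def lda_exchange_energy_density(n):
    """Dirac/Slater exchange energy per unit volume for density n (atomic units)."""
    c_x = -(3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)
    return c_x * n ** (4.0 / 3.0)

def lda_exchange_energy(n, dv):
    """Integrate the exchange energy density over grid cells of volume dv."""
    return np.sum(lda_exchange_energy_density(n)) * dv

# Illustrative example: a uniform electron gas of density 0.01 a.u. on 1000 grid cells of volume 1 a.u.
n_uniform = np.full(1000, 0.01)
print(lda_exchange_energy(n_uniform, dv=1.0))
```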
The LDA assumes that the exchange–correlation energy at each point is that of a uniform electron gas with the local density. Because of this, the LDA has a tendency to underestimate the exchange energy and overestimate the correlation energy. The errors due to the exchange and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is common to expand the exchange–correlation energy in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows corrections based on the changes in density away from the coordinate at which the functional is evaluated. These expansions are referred to as generalized gradient approximations (GGA) and have the following form:

$E_{\mathrm{XC}}^{\mathrm{GGA}}[n_{\uparrow}, n_{\downarrow}] = \int \varepsilon_{\mathrm{XC}}(n_{\uparrow}, n_{\downarrow}, \nabla n_{\uparrow}, \nabla n_{\downarrow})\, n(\mathbf{r})\, \mathrm{d}^{3}r.$
Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian), whereas GGA includes only the density and its first derivative in the exchange–correlation potential.
Functionals of this type are, for example, TPSS and the Minnesota Functionals. These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian (second derivative) of the density.
Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals.
Generalizations to include magnetic fields
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field. In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally.
Applications
In general, density functional theory finds increasingly broad application in chemistry and materials science for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behavior in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors. It has also been shown that DFT gives good results in the prediction of sensitivity of some nanostructures to environmental pollutants like sulfur dioxide or acrolein, as well as prediction of mechanical properties.
In practice, Kohn–Sham theory can be applied in several distinct ways, depending on what is being investigated. In solid-state calculations, the local density approximations are still commonly used along with plane-wave basis sets, as an electron-gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron-gas approximation; however, they must reduce to LDA in the electron-gas limit. Among physicists, one of the most widely used functionals is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized gradient parameterization of the free-electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation functionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a "training set" of molecules. Although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction-based methods like configuration interaction or coupled cluster theory). In the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.
Density functional theory is generally highly accurate but also highly computationally expensive. In recent years, DFT has been used with machine learning techniques, especially graph neural networks, to create machine learning potentials. These graph neural networks approximate DFT, with the aim of achieving similar accuracies with much less computation, and are especially beneficial for large systems. They are trained using DFT-calculated properties of a known set of molecules. Researchers have been trying to approximate DFT with machine learning for decades, but have only recently made good estimators. Breakthroughs in model architecture and data preprocessing that more heavily encoded theoretical knowledge, especially regarding symmetries and invariances, have enabled huge leaps in model performance. Using backpropagation, the process by which neural networks learn from training errors, to extract meaningful information about forces and densities has similarly improved the accuracy of machine learning potentials. By 2023, for example, the DFT approximator Matlantis could simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than DFT with similar accuracy, showcasing the power of DFT approximators in the artificial intelligence age. ML approximations of DFT have historically faced substantial transferability issues, with models failing to generalize potentials from some types of elements and compounds to others; improvements in architecture and data have slowly mitigated, but not eliminated, this issue. For very large systems, electrically nonneutral simulations, and intricate reaction pathways, DFT approximators often remain insufficiently lightweight computationally or insufficiently accurate.
Thomas–Fermi model
The predecessor to density functional theory was the Thomas–Fermi model, developed independently by both Llewellyn Thomas and Enrico Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space, with two electrons in every $h^{3}$ of volume. For each element of coordinate-space volume $\mathrm{d}^{3}r$ we can fill out a sphere of momentum space up to the Fermi momentum $p_{\mathrm{F}}$.

Equating the number of electrons in coordinate space to that in phase space gives

$n(\mathbf{r}) = \frac{8\pi}{3h^{3}}\, p_{\mathrm{F}}^{3}(\mathbf{r}).$

Solving for $p_{\mathrm{F}}$ and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:

$T_{\mathrm{TF}}[n] = C_{\mathrm{F}} \int n^{5/3}(\mathbf{r})\, \mathrm{d}^{3}r,$

where

$C_{\mathrm{F}} = \frac{3h^{2}}{10m}\left(\frac{3}{8\pi}\right)^{2/3}.$
As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density).
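The Thomas–Fermi kinetic-energy functional lends itself to direct numerical evaluation. The sketch below computes $T_{\mathrm{TF}}[n] = C_{\mathrm{F}} \int n^{5/3}\, \mathrm{d}^{3}r$ with $C_{\mathrm{F}} = \tfrac{3}{10}(3\pi^{2})^{2/3}$ in Hartree atomic units, together with the von Weizsäcker gradient term discussed further below; the one-dimensional Gaussian density profile is a made-up illustrative input, not a real atomic density.

```python
import numpy as np

C_F = 0.3 * (3.0 * np.pi ** 2) ** (2.0 / 3.0)  # Thomas-Fermi constant in atomic units

def thomas_fermi_kinetic_energy(n, dv):
    """T_TF[n] = C_F * integral of n^(5/3) over space (atomic units)."""
    return C_F * np.sum(n ** (5.0 / 3.0)) * dv

def von_weizsacker_correction(n, dx):
    """T_W[n] = (1/8) * integral of |grad n|^2 / n, here on a 1D grid for illustration."""
    grad_n = np.gradient(n, dx)
    return 0.125 * np.sum(grad_n ** 2 / n) * dx

# Illustrative 1D Gaussian density profile
x = np.linspace(-10.0, 10.0, 2001)
n = np.exp(-x ** 2)
dx = x[1] - x[0]
print(thomas_fermi_kinetic_energy(n, dv=dx))
print(von_weizsacker_correction(n, dx=dx))
```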
Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a conclusion of the Pauli principle. An exchange-energy functional was added by Paul Dirac in 1928.
However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy and those due to the complete neglect of electron correlation.
Edward Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional.
The kinetic-energy functional can be improved by adding the von Weizsäcker (1935) correction:

$T_{\mathrm{W}}[n] = \frac{\hbar^{2}}{8m} \int \frac{|\nabla n(\mathbf{r})|^{2}}{n(\mathbf{r})}\, \mathrm{d}^{3}r.$
Hohenberg–Kohn theorems
The Hohenberg–Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential.
Theorem 1. The external potential (and hence the total energy) is a unique functional of the electron density.

If two systems of electrons, one trapped in a potential $v_{1}(\mathbf{r})$ and the other in $v_{2}(\mathbf{r})$, have the same ground-state density $n(\mathbf{r})$, then $v_{1}(\mathbf{r}) - v_{2}(\mathbf{r})$ is necessarily a constant.

Corollary 1: the ground-state density uniquely determines the potential and thus all properties of the system, including the many-body wavefunction. In particular, the HK functional, defined as $F[n] = T[n] + U[n]$, is a universal functional of the density (not depending explicitly on the external potential).
Corollary 2: In light of the fact that the sum of the occupied energies provides the energy content of the Hamiltonian, a unique functional of the ground state charge density, the spectrum of the Hamiltonian is also a unique functional of the ground state charge density.
Theorem 2. The functional that delivers the ground-state energy of the system gives the lowest energy if and only if the input density is the true ground-state density.
In other words, the energy content of the Hamiltonian reaches its absolute minimum, i.e., the ground state, when the charge density is that of the ground state.
For any positive integer $N$ and potential $v(\mathbf{r})$, a density functional $F[n]$ exists such that

$E_{(v,N)}[n] = F[n] + \int v(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}^{3}r$

reaches its minimal value at the ground-state density of $N$ electrons in the potential $v(\mathbf{r})$. The minimal value of $E_{(v,N)}[n]$ is then the ground-state energy of this system.
Pseudo-potentials
The many-electron Schrödinger equation can be very much simplified if electrons are divided into two groups: valence electrons and inner-core electrons. The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential, that approximates the potential felt by the valence electrons was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.
Ab initio pseudo-potentials
A crucial step toward more realistic pseudo-potentials was given by William C. Topp and John Hopfield, who suggested that the pseudo-potential should be adjusted such that it describes the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained by inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance $r_{l}$. The pseudo-wavefunctions are also forced to have the same norm (i.e., the so-called norm-conserving condition) as the true valence wavefunctions, which can be written as

$\int_{0}^{r_{l}} |R_{\mathrm{PP},l}(r)|^{2}\, r^{2}\, \mathrm{d}r = \int_{0}^{r_{l}} |R_{\mathrm{AE},nl}(r)|^{2}\, r^{2}\, \mathrm{d}r,$

where $R_{l}(r)$ is the radial part of the wavefunction with angular momentum $l$, and PP and AE denote the pseudo-wavefunction and the true (all-electron) wavefunction respectively. The index $n$ in the true wavefunctions denotes the valence level. The distance $r_{l}$ beyond which the true and the pseudo-wavefunctions are equal is also dependent on $l$.
Electron smearing
The electrons of a system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle. This corresponds to the steplike Fermi–Dirac distribution at absolute zero. If there are several degenerate or close-to-degenerate eigenstates at the Fermi level, it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies. One approach is to assign a finite temperature to the electron Fermi–Dirac distribution. Other approaches include assigning a cumulative Gaussian distribution to the electrons or using the Methfessel–Paxton method.
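The fractional occupancies used in smearing schemes can be illustrated with a small numerical example. The sketch below fills a set of eigenvalues according to a Fermi–Dirac distribution at a finite smearing temperature, adjusting the Fermi level by bisection so that the occupations sum to the desired electron count; the eigenvalues and parameters are arbitrary illustrative numbers, and each level is assumed to hold at most one electron.

```python
import numpy as np

def fermi_dirac(eps, mu, kT):
    """Fermi-Dirac occupation of levels eps for chemical potential mu and smearing kT."""
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def find_fermi_level(eigenvalues, n_electrons, kT, tol=1e-10):
    """Bisection search for the Fermi level that yields the required electron count."""
    lo, hi = eigenvalues.min() - 10 * kT, eigenvalues.max() + 10 * kT
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if fermi_dirac(eigenvalues, mu, kT).sum() < n_electrons:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

eigenvalues = np.array([-1.2, -0.8, -0.79, -0.78, 0.1])  # nearly degenerate levels near the Fermi energy
mu = find_fermi_level(eigenvalues, n_electrons=3.0, kT=0.02)
print(fermi_dirac(eigenvalues, mu, kT=0.02))  # fractional occupancies instead of a sharp step
```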
Classical density functional theory
Classical density functional theory is a classical statistical method to investigate the properties of many-body systems consisting of interacting molecules, macromolecules, nanoparticles or microparticles. The classical non-relativistic method is correct for classical fluids with particle velocities less than the speed of light and thermal de Broglie wavelength smaller than the distance between particles. The theory is based on the calculus of variations of a thermodynamic functional, which is a function of the spatially dependent density function of particles, thus the name. The same name is used for quantum DFT, which is the theory to calculate the electronic structure of electrons based on spatially dependent electron density with quantum and relativistic effects. Classical DFT is a popular and useful method to study fluid phase transitions, ordering in complex liquids, physical characteristics of interfaces and nanomaterials. Since the 1970s it has been applied to the fields of materials science, biophysics, chemical engineering and civil engineering. Computational costs are much lower than for molecular dynamics simulations, which provide similar data and a more detailed description but are limited to small systems and short time scales. Classical DFT is valuable to interpret and test numerical results and to define trends although details of the precise motion of the particles are lost due to averaging over all possible particle trajectories. As in electronic systems, there are fundamental and numerical difficulties in using DFT to quantitatively describe the effect of intermolecular interaction on structure, correlations and thermodynamic properties.
Classical DFT addresses the difficulty of describing thermodynamic equilibrium states of many-particle systems with nonuniform density. Classical DFT has its roots in theories such as the van der Waals theory for the equation of state and the virial expansion method for the pressure. In order to account for correlation in the positions of particles the direct correlation function was introduced as the effective interaction between two particles in the presence of a number of surrounding particles by Leonard Ornstein and Frits Zernike in 1914. The connection to the density pair distribution function was given by the Ornstein–Zernike equation. The importance of correlation for thermodynamic properties was explored through density distribution functions. The functional derivative was introduced to define the distribution functions of classical mechanical systems. Theories were developed for simple and complex liquids using the ideal gas as a basis for the free energy and adding molecular forces as a second-order perturbation. A term in the gradient of the density was added to account for non-uniformity in density in the presence of external fields or surfaces. These theories can be considered precursors of DFT.
To develop a formalism for the statistical thermodynamics of non-uniform fluids, functional differentiation was used extensively by Percus and Lebowitz (1961), which led to the Percus–Yevick equation linking the density distribution function and the direct correlation. Other closure relations were also proposed, such as the classical-map hypernetted-chain method and the BBGKY hierarchy. In the late 1970s classical DFT was applied to the liquid–vapor interface and the calculation of surface tension. Other applications followed: the freezing of simple fluids, formation of the glass phase, the crystal–melt interface and dislocation in crystals, properties of polymer systems, and liquid crystal ordering. Classical DFT was applied to colloid dispersions, which were discovered to be good models for atomic systems. By assuming local chemical equilibrium and using the local chemical potential of the fluid from DFT as the driving force in fluid transport equations, equilibrium DFT is extended to describe non-equilibrium phenomena and fluid dynamics on small scales.
Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble. The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded on a system of uniform density using a functional Taylor expansion. The excess free energy is then a sum of the contributions from s-body interactions with density-dependent effective potentials representing the interactions between s particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional in arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential. The functional is not in general a convex functional of the density; solutions may not be local minima. Limiting to low-order corrections in the local density is a well-known problem, although the results agree (reasonably) well on comparison to experiment.
A variational principle is used to determine the equilibrium density. It can be shown that for constant temperature and volume the correct equilibrium density $\rho_{0}(\mathbf{r})$ minimizes the grand potential functional $\Omega[\rho]$ of the grand canonical ensemble over density functions $\rho(\mathbf{r})$. In the language of functional differentiation (Mermin theorem):

$\left.\frac{\delta \Omega[\rho]}{\delta \rho(\mathbf{r})}\right|_{\rho = \rho_{0}} = 0.$

The Helmholtz free energy functional is defined as $F[\rho] = \Omega[\rho] + \int \rho(\mathbf{r})\left(\mu - V_{\mathrm{ext}}(\mathbf{r})\right)\mathrm{d}\mathbf{r}$.

The functional derivative in the density function determines the local chemical potential: $\mu(\mathbf{r}) = \delta F[\rho]/\delta \rho(\mathbf{r})$.
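For the classical ideal gas, this minimization can be carried out in closed form: setting the functional derivative of the grand potential to zero gives the barometric-type law $\rho(\mathbf{r}) = \Lambda^{-3} \exp[\beta(\mu - V_{\mathrm{ext}}(\mathbf{r}))]$. The snippet below evaluates this equilibrium profile for a made-up one-dimensional external potential; the temperature, chemical potential and thermal wavelength are arbitrary illustrative values.

```python
import numpy as np

def ideal_gas_density(v_ext, mu, kT, thermal_wavelength):
    """Equilibrium density of a classical ideal gas in an external potential.

    Obtained by setting the functional derivative of the grand potential to zero:
    rho(r) = Lambda^-3 * exp[(mu - V_ext(r)) / kT].
    """
    return thermal_wavelength ** -3 * np.exp((mu - v_ext) / kT)

# Illustrative 1D harmonic external potential
z = np.linspace(-5.0, 5.0, 101)
v_ext = 0.5 * z ** 2
rho = ideal_gas_density(v_ext, mu=0.0, kT=1.0, thermal_wavelength=1.0)
print(rho.max(), rho.min())  # density peaks where the external potential is lowest
```

Interacting fluids add an excess free-energy contribution on top of this ideal term, which is exactly where the correlation functions discussed below enter.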
In classical statistical mechanics the partition function is a sum over microstates of classical particles, each weighted by the Boltzmann factor of the system's Hamiltonian. The Hamiltonian splits into kinetic and potential energy, which includes interactions between particles as well as external potentials. The partition function of the grand canonical ensemble defines the grand potential. A correlation function is introduced to describe the effective interaction between particles.
The s-body density distribution function is defined as the statistical ensemble average of particle positions. It measures the probability of finding $s$ particles at points $\mathbf{r}_{1}, \ldots, \mathbf{r}_{s}$ in space:
From the definition of the grand potential, the functional derivative with respect to the local chemical potential is the density; higher-order density correlations for two, three, four or more particles are found from higher-order derivatives:
The radial distribution function with s = 2 measures the change in the density at a given point for a change of the local chemical interaction at a distant point.
In a fluid the free energy is a sum of the ideal free energy and the excess free-energy contribution from interactions between particles. In the grand ensemble the functional derivatives in the density yield the direct correlation functions :
The one-body direct correlation function plays the role of an effective mean field. The functional derivative in density of the one-body direct correlation results in the direct correlation function between two particles. The direct correlation function is the correlation contribution to the change of local chemical potential at a point for a density change at another point and is related to the work of creating density changes at different positions. In dilute gases the direct correlation function is simply the pair-wise interaction between particles (Debye–Hückel equation). The Ornstein–Zernike equation between the pair and the direct correlation functions is derived from the equation
Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion on a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform density a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strong non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long range London dispersion force. In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested such as the weighted density approximation for a direct correlation function of a uniform system which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function.
The variational Mermin principle leads to an equation for the equilibrium density and system properties are calculated from the solution for the density. The equation is a non-linear integro-differential equation and finding a solution is not trivial, requiring numerical methods, except for the simplest models. Classical DFT is supported by standard software packages, and specific software is currently under development. Assumptions can be made to propose trial functions as solutions, and the free energy is expressed in the trial functions and optimized with respect to parameters of the trial functions. Examples are a localized Gaussian function centered on crystal lattice points for the density in a solid, the hyperbolic function for interfacial density profiles.
Classical DFT has found many applications, for example:
developing new functional materials in materials science, in particular nanotechnology;
studying the properties of fluids at surfaces and the phenomena of wetting and adsorption;
understanding life processes in biotechnology;
improving filtration methods for gases and fluids in chemical engineering;
fighting pollution of water and air in environmental science;
generating new procedures in microfluidics and nanofluidics.
The extension of classical DFT towards nonequilibrium systems is known as dynamical density functional theory (DDFT). DDFT makes it possible to describe the time evolution of the one-body density $\rho(\mathbf{r}, t)$ of a colloidal system, which is governed by the equation

$\frac{\partial \rho(\mathbf{r}, t)}{\partial t} = \Gamma\, \nabla \cdot \left( \rho(\mathbf{r}, t)\, \nabla \frac{\delta F[\rho]}{\delta \rho(\mathbf{r}, t)} \right),$

with the mobility $\Gamma$ and the free energy functional $F[\rho]$. DDFT can be derived from the microscopic equations of motion for a colloidal system (Langevin equations or Smoluchowski equation) based on the adiabatic approximation, which corresponds to the assumption that the two-body distribution in a nonequilibrium system is identical to that in an equilibrium system with the same one-body density. For a system of noninteracting particles, DDFT reduces to the standard diffusion equation.
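For noninteracting particles the free-energy functional contains only the ideal-gas term, and the DDFT equation above reduces to the ordinary diffusion equation with diffusion coefficient $D = \Gamma k_{\mathrm{B}} T$. The sketch below integrates that limiting case explicitly on a periodic one-dimensional grid; the grid spacing, time step and initial profile are arbitrary illustrative choices.

```python
import numpy as np

def diffuse(rho, D, dx, dt, steps):
    """Explicit finite-difference integration of d(rho)/dt = D * d2(rho)/dx2.

    This is the noninteracting limit of DDFT, where the free energy is purely ideal.
    """
    for _ in range(steps):
        laplacian = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx ** 2  # periodic boundaries
        rho = rho + dt * D * laplacian
    return rho

x = np.linspace(0.0, 10.0, 200, endpoint=False)
rho0 = np.exp(-((x - 5.0) ** 2))          # initial density peak
rho_t = diffuse(rho0, D=1.0, dx=x[1] - x[0], dt=1e-4, steps=5000)
print(rho0.sum(), rho_t.sum())            # total particle number is conserved as the peak spreads
```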
See also
Basis set (chemistry)
Dynamical mean field theory
Gas in a box
Harris functional
Helium atom
Kohn–Sham equations
Local density approximation
Molecule
Molecular design software
Molecular modelling
Quantum chemistry
Thomas–Fermi model
Time-dependent density functional theory
Car–Parrinello molecular dynamics
Lists
List of quantum chemistry and solid state physics software
List of software for molecular mechanics modeling
References
Sources
External links
Walter Kohn, Nobel Laureate – Video interview with Walter on his work developing density functional theory by the Vega Science Trust
Walter Kohn, Nobel Lecture
Electron Density Functional Theory – Lecture Notes
Density Functional Theory through Legendre Transformation pdf
Modeling Materials Continuum, Atomistic and Multiscale Techniques, Book
NIST Jarvis-DFT
Electronic structure methods | 0.792035 | 0.997741 | 0.790246 |
Wet chemistry | Wet chemistry is a form of analytical chemistry that uses classical methods such as observation to analyze materials. The term wet chemistry is used as most analytical work is done in the liquid phase. Wet chemistry is also known as bench chemistry, since many tests are performed at lab benches.
Materials
Wet chemistry commonly uses laboratory glassware such as beakers and graduated cylinders to prevent materials from being contaminated or interfered with by unintended sources. Bunsen burners and crucibles may also be used to evaporate and isolate substances in their dry forms. Wet chemistry is not performed with advanced instruments, since most of those scan substances automatically. However, simple instruments such as scales are used to measure the weight of a substance before and after a change occurs. Many high school and college laboratories teach students basic wet chemistry methods.
History
Before the age of theoretical and computational chemistry, wet chemistry was the predominant form of scientific discovery in the chemical field. This is why it is sometimes referred to as classic chemistry or classical chemistry. Scientists would continuously develop techniques to improve the accuracy of wet chemistry. Later on, instruments were developed to conduct research impossible for wet chemistry. Over time, this became a separate branch of analytical chemistry called instrumental analysis. Because of the high volume of wet chemistry that must be done in today's society and new quality control requirements, many wet chemistry methods have been automated and computerized for streamlined analysis. The manual performance of wet chemistry mostly occurs in schools.
Methods
Qualitative methods
Qualitative methods use changes in information that cannot be quantified to detect a change. This can include a change in color, smell, texture, etc.
Chemical tests
Chemical tests use reagents to indicate the presence of a specific chemical in an unknown solution. The reagents cause a unique reaction to occur based on the chemical it reacts with, allowing one to know what chemical is in the solution. An example is Heller's test where a test tube containing proteins has strong acids added to it. A cloudy ring forms where the substances meet, indicating the acids are denaturing the proteins. The cloud is a sign that proteins are present in a liquid. The method is used to detect proteins in a person's urine.
Flame test
The flame test is a more well known version of the chemical test. It is only used on metallic ions. The metal powder is burned, causing an emission of colors based on what metal was burned. For example, calcium (Ca) will burn orange and copper (Cu) will burn blue. Their color emissions are used to produce bright colors in fireworks.
Quantitative methods
Quantitative methods use information that can be measured and quantified to indicate a change. This can include changes in volume, concentration, weight, etc.
Gravimetric analysis
Gravimetric analysis measures the weight or concentration of a solid that has either formed from a precipitate or dissolved in a liquid. The mass of the liquid is recorded before it undergoes the reaction. For the precipitate, a reagent is added until the precipitate stops forming. The precipitate is then dried and weighed to determine the chemical's concentration in the liquid. For a dissolved substance, the liquid can be filtered until the solids are removed, or boiled until all the liquid evaporates. The solids are left alone until completely dried and then weighed to determine their concentration. Evaporating all the liquid is the more common approach.
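The arithmetic behind a gravimetric determination is straightforward, as the hypothetical example below shows: the concentration of a dissolved solid is the recovered dry mass divided by the sample volume, and a weighed precipitate can be converted to the analyte mass through the ratio of molar masses. The species, stoichiometry and numbers are invented for illustration only.

```python
def concentration_from_residue(dry_residue_g, sample_volume_l):
    """Concentration (g/L) of dissolved solids from the mass left after evaporation."""
    return dry_residue_g / sample_volume_l

def analyte_mass_from_precipitate(precipitate_g, molar_mass_precipitate, molar_mass_analyte):
    """Convert a weighed precipitate into the mass of analyte it contains (1:1 stoichiometry assumed)."""
    return precipitate_g * molar_mass_analyte / molar_mass_precipitate

# Hypothetical example: 0.42 g of dry residue recovered from 0.250 L of sample
print(concentration_from_residue(0.42, 0.250))            # 1.68 g/L total dissolved solids
# Hypothetical example: 0.35 g of AgCl precipitate -> mass of chloride (Cl) it contains
print(analyte_mass_from_precipitate(0.35, 143.32, 35.45))
```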
Volumetric analysis
Volumetric analysis or titration relies on volume measurements to determine the quantity of a chemical. A reagent with a known volume and concentration is added to a solution with an unknown substance or concentration. The amount of reagent required for a change to occur is proportional to the amount of the unknown substance. This reveals the amount of the unknown substance present. If no visible change is present, an indicator is added to the solution. For example, a pH indicator changes color based on the pH of the solution. The exact point where the color change occurs is called the endpoint. Since the color change can occur very suddenly, it is important to be extremely precise with all measurements.
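The quantitative step of a titration is a simple stoichiometric calculation, sketched below with invented numbers: the moles of titrant delivered at the endpoint, scaled by the reaction stoichiometry, give the moles and hence the concentration of the unknown.

```python
def analyte_concentration(titrant_molarity, titrant_volume_l, analyte_volume_l, stoich_ratio=1.0):
    """Concentration (mol/L) of the analyte from the titrant volume used at the endpoint.

    stoich_ratio = moles of analyte reacting per mole of titrant.
    """
    moles_titrant = titrant_molarity * titrant_volume_l
    moles_analyte = moles_titrant * stoich_ratio
    return moles_analyte / analyte_volume_l

# Hypothetical example: 23.4 mL of 0.100 M NaOH neutralizes 25.0 mL of an HCl solution (1:1 reaction)
print(analyte_concentration(0.100, 0.0234, 0.0250))  # ~0.0936 M HCl
```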
Colorimetry
Colorimetry is a unique method since it has both qualitative and quantitative properties. Its qualitative analysis involves recording color changes to indicate a change has occurred. This can be a change in shading of the color or a change into a completely different color. The quantitative aspect involves sensory equipment that can measure the wavelength of colors. Changes in wavelengths can be precisely measured and indicate changes in the mixture or solution.
Uses
Wet chemistry techniques can be used for qualitative chemical measurements, such as changes in color (colorimetry), but often involves more quantitative chemical measurements, using methods such as gravimetry and titrimetry. Some uses for wet chemistry include tests for:
pH (acidity, alkalinity)
concentration
conductivity (specific conductance)
cloud point (nonionic surfactants)
hardness
melting point
solids or dissolved solids
salinity
specific gravity
density
turbidity
viscosity
moisture (Karl Fischer titration)
Wet chemistry is also used in environmental chemistry settings to determine the current state of the environment. It is used to test:
Biochemical Oxygen Demand (BOD)
Chemical Oxygen Demand (COD)
eutrophication
coating identification
It can also involve the elemental analysis of samples, e.g., water sources, for chemicals such as:
Ammonia nitrogen
Chloride
Chromium
Cyanide
Dissolved oxygen
Fluoride
Nitrogen
Nitrate
Phenols
Phosphate
Phosphorus
Silica
Sulfates
Sulfides
See also
Wet laboratory
Further reading
References
Chemistry | 0.808043 | 0.977853 | 0.790148 |
Pharmacology | Pharmacology is the science of drugs and medications, including a substance's origin, composition, pharmacokinetics, pharmacodynamics, therapeutic use, and toxicology. More specifically, it is the study of the interactions that occur between a living organism and chemicals that affect normal or abnormal biochemical function. If substances have medicinal properties, they are considered pharmaceuticals.
The field encompasses drug composition and properties, functions, sources, synthesis and drug design, molecular and cellular mechanisms, organ/systems mechanisms, signal transduction/cellular communication, molecular diagnostics, interactions, chemical biology, therapy, and medical applications and antipathogenic capabilities. The two main areas of pharmacology are pharmacodynamics and pharmacokinetics. Pharmacodynamics studies the effects of a drug on biological systems, and pharmacokinetics studies the effects of biological systems on a drug. In broad terms, pharmacodynamics discusses the chemicals with biological receptors, and pharmacokinetics discusses the absorption, distribution, metabolism, and excretion (ADME) of chemicals from the biological systems.
Pharmacology is not synonymous with pharmacy, and the two terms are frequently confused. Pharmacology, a biomedical science, deals with the research, discovery, and characterization of chemicals which show biological effects and the elucidation of cellular and organismal function in relation to these chemicals. In contrast, pharmacy, a health services profession, is concerned with the application of the principles learned from pharmacology in clinical settings, whether in a dispensing or clinical care role. The primary contrast between the two is thus between direct patient care and pharmacy practice on the one hand, and the science-oriented research field driven by pharmacology on the other.
Etymology
The word pharmacology is derived from the Greek word φάρμακον, pharmakon, meaning "drug" or "poison", together with another Greek word -λογία, logia, with the meaning of "study of" or "knowledge of" (cf. the etymology of pharmacy). Pharmakon is related to pharmakos, the ritualistic sacrifice or exile of a human scapegoat or victim in Ancient Greek religion.
The modern term pharmacon is used more broadly than the term drug because it includes endogenous substances, and biologically active substances which are not used as drugs. Typically it includes pharmacological agonists and antagonists, but also enzyme inhibitors (such as monoamine oxidase inhibitors).
History
The origins of clinical pharmacology date back to the Middle Ages, with pharmacognosy and Avicenna's The Canon of Medicine, Peter of Spain's Commentary on Isaac, and John of St Amand's Commentary on the Antedotary of Nicholas. Early pharmacology focused on herbalism and natural substances, mainly plant extracts. Medicines were compiled in books called pharmacopoeias. Crude drugs have been used since prehistory as a preparation of substances from natural sources. However, the active ingredients of crude drugs are not purified, and the substance is adulterated with other substances.
Traditional medicine varies between cultures and may be specific to a particular culture, such as in traditional Chinese, Mongolian, Tibetan and Korean medicine. However, much of this has since been regarded as pseudoscience. Pharmacological substances known as entheogens may have spiritual and religious use and historical context.
In the 17th century, the English physician Nicholas Culpeper translated and used pharmacological texts. Culpeper detailed plants and the conditions they could treat. In the 18th century, much of clinical pharmacology was established by the work of William Withering. Pharmacology as a scientific discipline did not further advance until the mid-19th century amid the great biomedical resurgence of that period. Before the second half of the nineteenth century, the remarkable potency and specificity of the actions of drugs such as morphine, quinine and digitalis were explained vaguely and with reference to extraordinary chemical powers and affinities to certain organs or tissues. The first pharmacology department was set up by Rudolf Buchheim in 1847, at University of Tartu, in recognition of the need to understand how therapeutic drugs and poisons produced their effects. Subsequently, the first pharmacology department in England was set up in 1905 at University College London.
Pharmacology developed in the 19th century as a biomedical science that applied the principles of scientific experimentation to therapeutic contexts. The advancement of research techniques propelled pharmacological research and understanding. The development of the organ bath preparation, where tissue samples are connected to recording devices, such as a myograph, and physiological responses are recorded after drug application, allowed analysis of drugs' effects on tissues. The development of the ligand binding assay in 1945 allowed quantification of the binding affinity of drugs at chemical targets. Modern pharmacologists use techniques from genetics, molecular biology, biochemistry, and other advanced tools to transform information about molecular mechanisms and targets into therapies directed against disease, defects or pathogens, and create methods for preventive care, diagnostics, and ultimately personalized medicine.
Divisions
The discipline of pharmacology can be divided into many sub disciplines each with a specific focus.
Systems of the body
Pharmacology can also focus on specific systems comprising the body. Divisions related to bodily systems study the effects of drugs in different systems of the body. These include neuropharmacology, in the central and peripheral nervous systems; immunopharmacology in the immune system. Other divisions include cardiovascular, renal and endocrine pharmacology. Psychopharmacology is the study of the use of drugs that affect the psyche, mind and behavior (e.g. antidepressants) in treating mental disorders (e.g. depression). It incorporates approaches and techniques from neuropharmacology, animal behavior and behavioral neuroscience, and is interested in the behavioral and neurobiological mechanisms of action of psychoactive drugs. The related field of neuropsychopharmacology focuses on the effects of drugs at the overlap between the nervous system and the psyche.
Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Pharmacometabolomics can be applied to measure metabolite levels following the administration of a drug, in order to monitor the effects of the drug on metabolic pathways. Pharmacomicrobiomics studies the effect of microbiome variations on drug disposition, action, and toxicity. Pharmacomicrobiomics is concerned with the interaction between drugs and the gut microbiome. Pharmacogenomics is the application of genomic technologies to drug discovery and further characterization of drugs related to an organism's entire genome. For pharmacology regarding individual genes, pharmacogenetics studies how genetic variation gives rise to differing responses to drugs. Pharmacoepigenetics studies the underlying epigenetic marking patterns that lead to variation in an individual's response to medical treatment.
Clinical practice and drug discovery
Pharmacology can be applied within clinical sciences. Clinical pharmacology is the application of pharmacological methods and principles in the study of drugs in humans. An example of this is posology, which is the study of dosage of medicines.
Pharmacology is closely related to toxicology. Both pharmacology and toxicology are scientific disciplines that focus on understanding the properties and actions of chemicals. However, pharmacology emphasizes the therapeutic effects of chemicals, usually drugs or compounds that could become drugs, whereas toxicology is the study of chemical's adverse effects and risk assessment.
Pharmacological knowledge is used to advise pharmacotherapy in medicine and pharmacy.
Drug discovery
Drug discovery is the field of study concerned with creating new drugs. It encompasses the subfields of drug design and development. Drug discovery starts with drug design, which is the inventive process of finding new drugs. In the most basic sense, this involves the design of molecules that are complementary in shape and charge to a given biomolecular target. After a lead compound has been identified through drug discovery, drug development involves bringing the drug to the market. Drug discovery is related to pharmacoeconomics, which is the sub-discipline of health economics that considers the value of drugs. Pharmacoeconomics evaluates the costs and benefits of drugs in order to guide optimal healthcare resource allocation. The techniques used for the discovery, formulation, manufacturing and quality control of drugs are studied by pharmaceutical engineering, a branch of engineering. Safety pharmacology specialises in detecting and investigating potential undesirable effects of drugs.
Development of medication is a vital concern to medicine, but also has strong economical and political implications. To protect the consumer and prevent abuse, many governments regulate the manufacture, sale, and administration of medication. In the United States, the main body that regulates pharmaceuticals is the Food and Drug Administration; they enforce standards set by the United States Pharmacopoeia. In the European Union, the main body that regulates pharmaceuticals is the EMA, and they enforce standards set by the European Pharmacopoeia.
The metabolic stability and the reactivity of a library of candidate drug compounds have to be assessed for drug metabolism and toxicological studies. Many methods have been proposed for quantitative predictions in drug metabolism; one example of a recent computational method is SPORCalc. A slight alteration to the chemical structure of a medicinal compound could alter its medicinal properties, depending on how the alteration relates to the structure of the substrate or receptor site on which it acts: this is called the structure–activity relationship (SAR). When a useful activity has been identified, chemists will make many similar compounds called analogues, to try to maximize the desired medicinal effect(s). This can take anywhere from a few years to a decade or more, and is very expensive. One must also determine how safe the medicine is to consume, its stability in the human body and the best form for delivery to the desired organ system, such as tablet or aerosol. After extensive testing, which can take up to six years, the new medicine is ready for marketing and selling.
Because of these long timescales, and because out of every 5000 potential new medicines typically only one will ever reach the open market, this is an expensive way of doing things, often costing over 1 billion dollars. To recoup this outlay pharmaceutical companies may do a number of things:
Carefully research the demand for their potential new product before spending an outlay of company funds.
Obtain a patent on the new medicine preventing other companies from producing that medicine for a certain allocation of time.
The inverse benefit law describes the relationship between a drug's therapeutic benefits and its marketing.
When designing drugs, the placebo effect must be considered to assess the drug's true therapeutic value.
Drug development uses techniques from medicinal chemistry to chemically design drugs. This overlaps with the biological approach of finding targets and physiological effects.
Wider contexts
Pharmacology can be studied in relation to wider contexts than the physiology of individuals. For example, pharmacoepidemiology concerns the variations of the effects of drugs in or between populations; it is the bridge between clinical pharmacology and epidemiology. Pharmacoenvironmentology or environmental pharmacology is the study of the effects of used pharmaceuticals and personal care products (PPCPs) on the environment after their elimination from the body. Human health and ecology are intimately related, so environmental pharmacology studies the environmental effect of drugs and of pharmaceuticals and personal care products in the environment.
Drugs may also have ethnocultural importance, so ethnopharmacology studies the ethnic and cultural aspects of pharmacology.
Emerging fields
Photopharmacology is an emerging approach in medicine in which drugs are activated and deactivated with light. The energy of light is used to change the shape and chemical properties of the drug, resulting in different biological activity. This is done to ultimately achieve control over when and where drugs are active, in a reversible manner, to prevent side effects and pollution of the environment by drugs.
Theory of pharmacology
The study of chemicals requires intimate knowledge of the biological system affected. With the knowledge of cell biology and biochemistry increasing, the field of pharmacology has also changed substantially. It has become possible, through molecular analysis of receptors, to design chemicals that act on specific cellular signaling or metabolic pathways by affecting sites directly on cell-surface receptors (which modulate and mediate cellular signaling pathways controlling cellular function).
Chemicals can have pharmacologically relevant properties and effects. Pharmacokinetics describes the effect of the body on the chemical (e.g. half-life and volume of distribution), and pharmacodynamics describes the chemical's effect on the body (desired or toxic).
Systems, receptors and ligands
Pharmacology is typically studied with respect to particular systems, for example endogenous neurotransmitter systems. The major systems studied in pharmacology can be categorised by their ligands and include acetylcholine, adrenaline, glutamate, GABA, dopamine, histamine, serotonin, cannabinoid and opioid.
Molecular targets in pharmacology include receptors, enzymes and membrane transport proteins. Enzymes can be targeted with enzyme inhibitors. Receptors are typically categorised based on structure and function. Major receptor types studied in pharmacology include G protein coupled receptors, ligand gated ion channels and receptor tyrosine kinases.
Network pharmacology is a subfield of pharmacology that combines principles from pharmacology, systems biology, and network analysis to study the complex interactions between drugs and targets (e.g., receptors or enzymes) in biological systems. The topology of a biochemical reaction network determines the shape of the drug dose–response curve as well as the type of drug–drug interactions, and can thus help in designing efficient and safe therapeutic strategies. Network pharmacology utilizes computational tools and network-analysis algorithms to identify drug targets, predict drug–drug interactions, elucidate signaling pathways, and explore the polypharmacology of drugs.
Pharmacodynamics
Pharmacodynamics describes how a drug acts on the body, that is, the biological response that the drug produces. Pharmacodynamic theory often investigates the binding affinity of ligands to their receptors. Ligands can be agonists, partial agonists or antagonists at specific receptors in the body. Agonists bind to receptors and produce a biological response; a partial agonist produces a biological response lower than that of a full agonist; antagonists have affinity for a receptor but do not produce a biological response.
The ability of a ligand to produce a biological response is termed efficacy; in a dose–response profile it is indicated as a percentage on the y-axis, where 100% is the maximal efficacy (all receptors are occupied).
Binding affinity is the ability of a ligand to form a ligand–receptor complex, either through weak attractive forces (reversible binding) or a covalent bond (irreversible binding); efficacy is therefore dependent on binding affinity.
The potency of a drug is a measure of its effectiveness. The EC50 is the concentration of a drug that produces 50% of its maximal effect; the lower this concentration, the higher the potency of the drug, so EC50 values can be used to compare the potencies of drugs.
Medication is said to have a narrow or wide therapeutic index, a certain safety factor, or therapeutic window. This describes the ratio of desired effect to toxic effect. A compound with a narrow therapeutic index (close to one) exerts its desired effect at a dose close to its toxic dose. A compound with a wide therapeutic index (greater than five) exerts its desired effect at a dose substantially below its toxic dose. Those with a narrow margin are more difficult to dose and administer, and may require therapeutic drug monitoring (examples are warfarin, some antiepileptics, and aminoglycoside antibiotics). Most anti-cancer drugs have a narrow therapeutic margin: toxic side-effects are almost always encountered at doses used to kill tumors.
The combined effect of drugs can be described with Loewe additivity, which is one of several common reference models.
Other models include the Hill equation, Cheng–Prusoff equation and Schild regression.
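The Hill equation mentioned above gives a convenient closed form for a dose–response curve: the fractional response is E/E_max = C^n / (EC50^n + C^n), where C is the drug concentration, EC50 the half-maximal concentration and n the Hill coefficient. The sketch below evaluates this curve and compares the potencies of two hypothetical drugs through their EC50 values; all drug names, concentrations and parameters are invented for illustration.

```python
import numpy as np

def hill_response(concentration, ec50, hill_coefficient=1.0, e_max=100.0):
    """Percent of maximal response for a given drug concentration (Hill equation)."""
    c_n = concentration ** hill_coefficient
    return e_max * c_n / (ec50 ** hill_coefficient + c_n)

concentrations = np.logspace(-9, -4, 6)              # 1 nM to 100 uM
drug_a = hill_response(concentrations, ec50=1e-7)    # hypothetical drug A, EC50 = 100 nM
drug_b = hill_response(concentrations, ec50=1e-6)    # hypothetical drug B, EC50 = 1 uM (less potent)
for c, a, b in zip(concentrations, drug_a, drug_b):
    print(f"{c:.1e} M: drug A {a:5.1f}%  drug B {b:5.1f}%")
```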
Pharmacokinetics
Pharmacokinetics is the study of the bodily absorption, distribution, metabolism, and excretion of drugs.
When describing the pharmacokinetic properties of the chemical that is the active ingredient or active pharmaceutical ingredient (API), pharmacologists are often interested in L-ADME:
Liberation – How is the API disintegrated (for solid oral forms, by breaking down into smaller particles), dispersed, or dissolved from the medication?
Absorption – How is the API absorbed (through the skin, the intestine, the oral mucosa)?
Distribution – How does the API spread through the organism?
Metabolism – Is the API converted chemically inside the body, and into which substances? Are these active as well? Could they be toxic?
Excretion – How is the API excreted (through the bile, urine, breath, skin)?
Drug metabolism is assessed in pharmacokinetics and is important in drug research and prescribing.
Pharmacokinetics is the movement of the drug in the body; it is usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of absorption, the extent of distribution, metabolism and elimination. The drug needs to have an appropriate molecular weight, polarity, etc. in order to be absorbed. The fraction of a drug that reaches the systemic circulation is termed bioavailability; it is estimated by comparing drug exposure (the area under the plasma concentration-time curve) after oral administration with that after intravenous administration, for which the first-pass effect is avoided and no drug is lost before reaching the circulation. A drug must be sufficiently lipophilic (lipid soluble) to pass through biological membranes, because biological membranes are built around a lipid bilayer (phospholipids etc.). Once the drug reaches the blood circulation it is distributed throughout the body, becoming more concentrated in highly perfused organs.
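As a rough numerical illustration of the bioavailability ratio described above, the following sketch (Python with NumPy; the concentration-time values and the equal-dose assumption are hypothetical) integrates oral and intravenous plasma concentration-time curves with the trapezoidal rule and takes the ratio of the two areas as F.

    import numpy as np

    # Hypothetical plasma concentration-time data (mg/L at the given hours).
    t_h    = np.array([0, 0.5, 1, 2, 4, 8, 12, 24], dtype=float)
    c_oral = np.array([0.0, 1.2, 2.0, 2.4, 1.8, 0.9, 0.4, 0.1])
    c_iv   = np.array([5.0, 4.1, 3.4, 2.3, 1.1, 0.3, 0.1, 0.0])

    auc_oral = np.trapz(c_oral, t_h)   # trapezoidal AUC, mg*h/L
    auc_iv   = np.trapz(c_iv, t_h)

    # Equal doses assumed here; otherwise scale by dose_iv / dose_oral.
    bioavailability = auc_oral / auc_iv
    print(f"F = {bioavailability:.2f}")   # fraction of the oral dose reaching circulation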
Administration, drug policy and safety
Drug policy
In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements:
The drug must be found to be effective against the disease for which it is seeking approval (where 'effective' means only that the drug performed better than placebo or competitors in at least two trials).
The drug must meet safety criteria by being subject to animal and controlled human testing.
Gaining FDA approval usually takes several years. Testing done on animals must be extensive and must include several species to help in the evaluation of both the effectiveness and toxicity of the drug. The dosage of any drug approved for use is intended to fall within a range in which the drug produces a therapeutic effect or desired outcome.
The safety and effectiveness of prescription drugs in the U.S. are regulated by the federal Prescription Drug Marketing Act of 1987.
The Medicines and Healthcare products Regulatory Agency (MHRA) has a similar role in the UK.
Medicare Part D is a prescription drug plan in the U.S.
The Prescription Drug Marketing Act (PDMA) is an act related to drug policy.
Prescription drugs are drugs regulated by legislation.
Societies and education
Societies and administration
The International Union of Basic and Clinical Pharmacology, Federation of European Pharmacological Societies and European Association for Clinical Pharmacology and Therapeutics are organisations representing standardisation and regulation of clinical and scientific pharmacology.
Systems for medical classification of drugs with pharmaceutical codes have been developed. These include the National Drug Code (NDC), administered by the Food and Drug Administration; the Drug Identification Number (DIN), administered by Health Canada under the Food and Drugs Act; Hong Kong Drug Registration, administered by the Pharmaceutical Service of the Department of Health (Hong Kong); and the National Pharmaceutical Product Index in South Africa. Hierarchical systems have also been developed, including the Anatomical Therapeutic Chemical Classification System (ATC, or ATC/DDD), administered by the World Health Organization; the Generic Product Identifier (GPI), a hierarchical classification number published by MediSpan; and SNOMED, C axis. Ingredients of drugs have been categorised by Unique Ingredient Identifier.
Education
The study of pharmacology overlaps with biomedical sciences and is the study of the effects of drugs on living organisms. Pharmacological research can lead to new drug discoveries, and promote a better understanding of human physiology. Students of pharmacology must have a detailed working knowledge of aspects in physiology, pathology, and chemistry. They may also require knowledge of plants as sources of pharmacologically active compounds. Modern pharmacology is interdisciplinary and involves biophysical and computational sciences, and analytical chemistry. A pharmacist needs to be well-equipped with knowledge on pharmacology for application in pharmaceutical research or pharmacy practice in hospitals or commercial organisations selling to customers. Pharmacologists, however, usually work in a laboratory undertaking research or development of new products. Pharmacological research is important in academic research (medical and non-medical), private industrial positions, science writing, scientific patents and law, consultation, biotech and pharmaceutical employment, the alcohol industry, food industry, forensics/law enforcement, public health, and environmental/ecological sciences. Pharmacology is often taught to pharmacy and medicine students as part of a Medical School curriculum.
See also
References
External links
American Society for Pharmacology and Experimental Therapeutics
British Pharmacological Society
International Conference on Harmonisation
US Pharmacopeia
International Union of Basic and Clinical Pharmacology
IUPHAR Committee on Receptor Nomenclature and Drug Classification
IUPHAR/BPS Guide to Pharmacology
Further reading
Biochemistry
Life sciences industry | 0.791274 | 0.997744 | 0.789489 |
Bioenergetics | Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis.
In a living organism, chemical bonds are broken and made as part of the exchange and transformation of energy. Energy is available for work (such as mechanical work) or for other processes (such as chemical synthesis and anabolic processes in growth), when weak bonds are broken and stronger bonds are made. The production of stronger bonds allows release of usable energy.
Adenosine triphosphate (ATP) is the main "energy currency" for organisms; the goal of metabolic and catabolic processes is to synthesize ATP from available starting materials (from the environment), and to break down ATP (into adenosine diphosphate (ADP) and inorganic phosphate) by utilizing it in biological processes. In a cell, the ratio of ATP to ADP concentrations is known as the "energy charge" of the cell. A cell can use this energy charge to relay information about cellular needs; if there is more ATP than ADP available, the cell can use ATP to do work, but if there is more ADP than ATP available, the cell must synthesize ATP via oxidative phosphorylation.
Living organisms produce ATP from energy sources via oxidative phosphorylation. The terminal phosphate bonds of ATP are relatively weak compared with the stronger bonds formed when ATP is hydrolyzed (broken down by water) to adenosine diphosphate and inorganic phosphate. Here it is the thermodynamically favorable free energy of hydrolysis that results in energy release; the phosphoanhydride bond between the terminal phosphate group and the rest of the ATP molecule does not itself contain this energy. An organism's stockpile of ATP is used as a battery to store energy in cells. Utilization of chemical energy from such molecular bond rearrangement powers biological processes in every biological organism.
Living organisms obtain energy from organic and inorganic materials; i.e. ATP can be synthesized from a variety of biochemical precursors. For example, lithotrophs can oxidize minerals such as nitrates or forms of sulfur, such as elemental sulfur, sulfites, and hydrogen sulfide to produce ATP. In photosynthesis, autotrophs produce ATP using light energy, whereas heterotrophs must consume organic compounds, mostly including carbohydrates, fats, and proteins. The amount of energy actually obtained by the organism is lower than the amount present in the food; there are losses in digestion, metabolism, and thermogenesis.
Environmental materials that an organism intakes are generally combined with oxygen to release energy, although some nutrients can also be oxidized anaerobically by various organisms. The utilization of these materials is a form of slow combustion because the nutrients are reacted with oxygen (the materials are oxidized slowly enough that the organisms do not produce fire). The oxidation releases energy, which may evolve as heat or be used by the organism for other purposes, such as breaking chemical bonds.
Types of reactions
An exergonic reaction is a spontaneous chemical reaction that releases energy. It is thermodynamically favored, indexed by a negative value of ΔG (Gibbs free energy). Over the course of a reaction, energy needs to be put in, and this activation energy drives the reactants from a stable state to a highly energetically unstable transition state, and then on to a more stable state that is lower in energy (see: reaction coordinate). The reactants are usually complex molecules that are broken into simpler products. The entire reaction is usually catabolic. The release of energy is reflected in a negative change in Gibbs free energy (−ΔG), because energy is released from the reactants to the products.
An endergonic reaction is an anabolic chemical reaction that consumes energy. It is the opposite of an exergonic reaction. It has a positive ΔG because it takes more energy to break the bonds of the reactant than the energy of the products offer, i.e. the products have weaker bonds than the reactants. Thus, endergonic reactions are thermodynamically unfavorable. Additionally, endergonic reactions are usually anabolic.
The free energy (ΔG) gained or lost in a reaction can be calculated as follows: ΔG = ΔH − TΔS
where ∆G = Gibbs free energy, ∆H = enthalpy, T = temperature (in kelvins), and ∆S = entropy.
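A minimal numerical illustration of this relation (Python; the enthalpy and entropy values are invented round numbers, not measurements for any particular reaction):

    def gibbs_free_energy(delta_h, delta_s, temperature):
        """Return dG = dH - T*dS (kJ/mol if dH is in kJ/mol and dS in kJ/(mol*K))."""
        return delta_h - temperature * delta_s

    # Hypothetical reaction at body temperature (310 K).
    dH = -20.0   # kJ/mol, exothermic
    dS = 0.05    # kJ/(mol*K), entropy increases
    dG = gibbs_free_energy(dH, dS, 310.0)
    print(f"dG = {dG:.1f} kJ/mol ->",
          "spontaneous (exergonic)" if dG < 0 else "non-spontaneous (endergonic)")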
Examples of major bioenergetic processes
Glycolysis is the process of breaking down glucose into pyruvate, producing two molecules of ATP (per molecule of glucose) in the process. When a cell has a higher concentration of ATP than ADP (i.e. a high energy charge), glycolysis is inhibited and the cell has little need to release energy from available glucose to perform biological work. Pyruvate is one product of glycolysis, and can be shuttled into other metabolic pathways (gluconeogenesis, etc.) as needed by the cell. Additionally, glycolysis produces reducing equivalents in the form of NADH (nicotinamide adenine dinucleotide), which will ultimately be used to donate electrons to the electron transport chain.
Gluconeogenesis is the opposite of glycolysis; when the cell's energy charge is low (the concentration of ADP is higher than that of ATP), the cell must synthesize glucose from carbon-containing biomolecules such as proteins, amino acids, fats, pyruvate, etc. For example, proteins can be broken down into amino acids, and these simpler carbon skeletons are used to build/synthesize glucose.
The citric acid cycle is a process of cellular respiration in which acetyl coenzyme A, synthesized from pyruvate by pyruvate dehydrogenase, is first reacted with oxaloacetate to yield citrate. The remaining eight reactions produce other carbon-containing metabolites. These metabolites are successively oxidized, and the free energy of oxidation is conserved in the form of the reduced coenzymes FADH2 and NADH. These reduced electron carriers can then be re-oxidized when they transfer electrons to the electron transport chain.
Ketosis is a metabolic state in which the body prioritizes ketone bodies, produced from fat, as its primary fuel source instead of glucose. This shift often occurs when glucose availability is low, such as during prolonged fasting, strenuous exercise, or specialized diets like ketogenic plans; under these conditions the body adopts ketosis as an efficient alternative for energy production. This metabolic adaptation allows the body to conserve precious glucose for organs that depend on it, like the brain, while utilizing readily available fat stores for fuel.
Oxidative phosphorylation and the electron transport chain together form the process in which reducing equivalents such as NADPH, FADH2 and NADH donate electrons to a series of redox reactions that take place in electron transport chain complexes. These redox reactions take place in enzyme complexes situated within the inner mitochondrial membrane. They transfer electrons "down" the electron transport chain, which is coupled to the generation of a proton motive force. The resulting difference in proton concentration between the mitochondrial matrix and the intermembrane space is used to drive ATP synthesis via ATP synthase.
Photosynthesis, another major bioenergetic process, is the metabolic pathway used by plants in which solar energy is used to synthesize glucose from carbon dioxide and water. This reaction takes place in the chloroplast. The ATP required for this synthesis is produced by photophosphorylation during the light-dependent reactions.
Additional information
During energy transformations in living systems, the creation of order and organization must be compensated by releasing energy, which increases the entropy of the surroundings.
Organisms are open systems that exchange materials and energy with the environment. They are never at equilibrium with their surroundings.
Energy is spent to create and maintain order in the cells, and surplus energy and other simpler by-products are released to create disorder, so that the entropy of the surroundings increases.
In a reversible process, entropy remains constant, whereas in an irreversible process (more common in real-world scenarios), entropy tends to increase.
During phase changes (from solid to liquid, or to gas), entropy increases because the number of possible arrangements of particles increases.
If ∆G<0, the chemical reaction is spontaneous and favourable in that direction.
If ∆G=0, the reactants and products of chemical reaction are at equilibrium.
If ∆G>0, the chemical reaction is non-spontaneous and unfavorable in that direction.
∆G is not an indicator of the velocity or rate at which a chemical reaction approaches equilibrium. That rate depends on factors such as the amount of enzyme and the activation energy.
Reaction coupling
Reaction coupling is the linkage of chemical reactions such that the product of one reaction becomes the substrate of another reaction.
This allows organisms to utilize energy and resources efficiently. For example, in cellular respiration, the energy released by the breakdown of glucose is coupled to the synthesis of ATP.
Cotransport
In August 1960, Robert K. Crane presented for the first time his discovery of the sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first ever proposal of flux coupling in biology and was the most important event concerning carbohydrate absorption in the 20th century.
Chemiosmotic theory
One of the major triumphs of bioenergetics is Peter D. Mitchell's chemiosmotic theory of how protons in aqueous solution function in the production of ATP in cell organelles such as mitochondria. This work earned Mitchell the 1978 Nobel Prize for Chemistry. Other cellular sources of ATP such as glycolysis were understood first, but such processes for direct coupling of enzyme activity to ATP production are not the major source of useful chemical energy in most cells. Chemiosmotic coupling is the major energy producing process in most cells, being utilized in chloroplasts and several single celled organisms in addition to mitochondria.
Binding Change Mechanism
The binding change mechanism, proposed by Paul Boyer and John E. Walker, who were awarded the Nobel Prize in Chemistry in 1997, suggests that ATP synthesis is linked to a conformational change in ATP synthase. This change is triggered by the rotation of the gamma subunit. ATP synthesis can be achieved through several mechanisms. The first mechanism postulates that the free energy of the proton gradient is utilized to alter the conformation of polypeptide molecules in the ATP synthesis active centers. The second mechanism suggests that the change in the conformational state is also produced by the transformation of mechanical energy into chemical energy using biological mechanoemission.
Energy balance
Energy homeostasis is the homeostatic control of energy balance – the difference between energy obtained through food consumption and energy expenditure – in living systems.
See also
Bioenergetic systems
Cellular respiration
Photosynthesis
ATP synthase
Active transport
Myosin
Exercise physiology
Table of standard Gibbs free energies
References
Further reading
Juretic, D., 2021. Bioenergetics: a bridge across life and universe. CRC Press.
External links
The Molecular & Cellular Bioenergetics Gordon Research Conference (see).
American Society of Exercise Physiologists
Biochemistry
Biophysics
Cell biology
Energy (physics) | 0.797211 | 0.990302 | 0.78948 |
Diffusion | Diffusion is the net movement of anything (for example, atoms, ions, molecules, energy) generally from a region of higher concentration to a region of lower concentration. Diffusion is driven by a gradient in Gibbs free energy or chemical potential. It is possible to diffuse "uphill" from a region of lower concentration to a region of higher concentration, as in spinodal decomposition. Diffusion is a stochastic process due to the inherent randomness of the diffusing entity and can be used to model many real-life stochastic scenarios. Therefore, diffusion and the corresponding mathematical models are used in several fields beyond physics, such as statistics, probability theory, information theory, neural networks, finance, and marketing.
The concept of diffusion is widely used in many fields, including physics (particle diffusion), chemistry, biology, sociology, economics, statistics, data science, and finance (diffusion of people, ideas, data and price values). The central idea of diffusion, however, is common to all of these: a substance or collection undergoing diffusion spreads out from a point or location at which there is a higher concentration of that substance or collection.
A gradient is the change in the value of a quantity; for example, concentration, pressure, or temperature with the change in another variable, usually distance. A change in concentration over a distance is called a concentration gradient, a change in pressure over a distance is called a pressure gradient, and a change in temperature over a distance is called a temperature gradient.
The word diffusion derives from the Latin word, diffundere, which means "to spread out".
A distinguishing feature of diffusion is that it depends on particle random walk, and results in mixing or mass transport without requiring directed bulk motion. Bulk motion, or bulk flow, is the characteristic of advection. The term convection is used to describe the combination of both transport phenomena.
If a diffusion process can be described by Fick's laws, it is called normal diffusion (or Fickian diffusion); otherwise, it is called anomalous diffusion (or non-Fickian diffusion).
When talking about the extent of diffusion, two length scales are used in two different scenarios:
Brownian motion of an impulsive point source (for example, one single spray of perfume)—the square root of the mean squared displacement from this point. In Fickian diffusion, this is √(2nDt), where n is the dimension of this Brownian motion;
Constant concentration source in one dimension—the diffusion length. In Fickian diffusion, this is 2√(Dt). Both length scales are illustrated in the sketch after this list.
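A short sketch of these two length scales (Python; the diffusion coefficient is a typical order-of-magnitude value for a small molecule in air, used purely for illustration):

    import math

    def rms_displacement(D, t, dim=3):
        """Root-mean-square displacement sqrt(2*dim*D*t) for an impulsive point source."""
        return math.sqrt(2 * dim * D * t)

    def diffusion_length(D, t):
        """Diffusion length 2*sqrt(D*t) for a constant-concentration source in one dimension."""
        return 2 * math.sqrt(D * t)

    D_air = 2e-5   # m^2/s, rough value for a small molecule (e.g. perfume vapour) in air
    for t in (1.0, 60.0, 3600.0):   # 1 s, 1 min, 1 h
        print(f"t = {t:7.0f} s:  rms = {rms_displacement(D_air, t)*100:6.2f} cm,"
              f"  L = {diffusion_length(D_air, t)*100:6.2f} cm")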
Diffusion vs. bulk flow
"Bulk flow" is the movement/flow of an entire body due to a pressure gradient (for example, water coming out of a tap). "Diffusion" is the gradual movement/dispersion of concentration within a body with no net movement of matter. An example of a process where both bulk motion and diffusion occur is human breathing.
First, there is a "bulk flow" process. The lungs are located in the thoracic cavity, which expands as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which causes a decrease in pressure in the alveoli. This creates a pressure gradient between the air outside the body at relatively high pressure and the alveoli at relatively low pressure. The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air and that in the alveoli are equal, that is, the movement of air by bulk flow stops once there is no longer a pressure gradient.
Second, there is a "diffusion" process. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli. Oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in alveoli is that the concentration of carbon dioxide in the alveoli decreases. This creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli, as fresh air has a very low concentration of carbon dioxide compared to the blood in the body.
Third, there is another "bulk flow" process. The pumping action of the heart then transports the blood around the body. As the left ventricle of the heart contracts, the volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow down the pressure gradient.
Diffusion in the context of different disciplines
There are two ways to introduce the notion of diffusion: either a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles.
In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentrations. It goes from regions of higher concentration to regions of lower concentration. Sometime later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics.
From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules in a gas, liquid, or solid are self-propelled by kinetic energy. Random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown, who found that minute particles suspended in a liquid medium and just large enough to be visible under an optical microscope exhibit a rapid and continually irregular motion, known as Brownian movement. The theory of Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein.
The concept of diffusion is typically applied to any subject matter involving random walks in ensembles of individuals.
In chemistry and materials science, diffusion also refers to the movement of fluid molecules in porous solids. Different types of diffusion are distinguished in porous solids. Molecular diffusion occurs when the collision with another molecule is more likely than the collision with the pore walls. Under such conditions, the diffusivity is similar to that in a non-confined space and is proportional to the mean free path. Knudsen diffusion occurs when the pore diameter is comparable to or smaller than the mean free path of the molecule diffusing through the pore. Under this condition, the collision with the pore walls becomes gradually more likely and the diffusivity is lower. Finally there is configurational diffusion, which happens if the molecules have comparable size to that of the pore. Under this condition, the diffusivity is much lower compared to molecular diffusion and small differences in the kinetic diameter of the molecule cause large differences in diffusivity.
Biologists often use the terms "net movement" or "net diffusion" to describe the movement of ions or molecules by diffusion. For example, oxygen can diffuse through cell membranes so long as there is a higher concentration of oxygen outside the cell. However, because the movement of molecules is random, occasionally oxygen molecules move out of the cell (against the concentration gradient). Because there are more oxygen molecules outside the cell, the probability that oxygen molecules will enter the cell is higher than the probability that oxygen molecules will leave the cell. Therefore, the "net" movement of oxygen molecules (the difference between the number of molecules either entering or leaving the cell) is into the cell. In other words, there is a net movement of oxygen molecules down the concentration gradient.
History of diffusion in physics
Diffusion in solids was used long before the theory of diffusion was created. For example, Pliny the Elder had previously described the cementation process, which produces steel from the element iron (Fe) through carbon diffusion. Another example, well known for many centuries, is the diffusion of colors in stained glass, earthenware and Chinese ceramics.
In modern science, the first systematic experimental study of diffusion was performed by Thomas Graham. He studied diffusion in gases, and the main phenomenon was described by him in 1831–1833:
"...gases of different nature, when brought into contact, do not arrange themselves according to their density, the heaviest undermost, and the lighter uppermost, but they spontaneously diffuse, mutually and equally, through each other, and so remain in the intimate state of mixture for any length of time."
The measurements of Graham contributed to James Clerk Maxwell deriving, in 1867, the coefficient of diffusion for CO2 in air, with an error of less than 5%.
In 1855, Adolf Fick, the 26-year-old anatomy demonstrator from Zürich, proposed his law of diffusion. He used Graham's research, stating his goal as "the development of a fundamental law, for the operation of diffusion in a single element of space". He asserted a deep analogy between diffusion and conduction of heat or electricity, creating a formalism similar to Fourier's law for heat conduction (1822) and Ohm's law for electric current (1827).
Robert Boyle demonstrated diffusion in solids in the 17th century by the penetration of zinc into a copper coin. Nevertheless, diffusion in solids was not systematically studied until the second part of the 19th century. William Chandler Roberts-Austen, the well-known British metallurgist and former assistant of Thomas Graham, systematically studied solid-state diffusion using the example of gold in lead in 1896:
"... My long connection with Graham's researches made it almost a duty to attempt to extend his work on liquid diffusion to metals."
In 1858, Rudolf Clausius introduced the concept of the mean free path. In the same year, James Clerk Maxwell developed the first atomistic theory of transport processes in gases. The modern atomistic theory of diffusion and Brownian motion was developed by Albert Einstein, Marian Smoluchowski and Jean-Baptiste Perrin. Ludwig Boltzmann, in the development of the atomistic backgrounds of the macroscopic transport processes, introduced the Boltzmann equation, which has served mathematics and physics with a source of transport process ideas and concerns for more than 140 years.
In 1920–1921, George de Hevesy measured self-diffusion using radioisotopes. He studied self-diffusion of radioactive isotopes of lead in the liquid and solid lead.
Yakov Frenkel (sometimes, Jakov/Jacob Frenkel) proposed, and elaborated in 1926, the idea of diffusion in crystals through local defects (vacancies and interstitial atoms). He concluded that the diffusion process in condensed matter is an ensemble of elementary jumps and quasichemical interactions of particles and defects. He introduced several mechanisms of diffusion and found rate constants from experimental data.
Sometime later, Carl Wagner and Walter H. Schottky developed Frenkel's ideas about mechanisms of diffusion further. Presently, it is universally recognized that atomic defects are necessary to mediate diffusion in crystals.
Henry Eyring, with co-authors, applied his theory of absolute reaction rates to Frenkel's quasichemical model of diffusion. The analogy between reaction kinetics and diffusion leads to various nonlinear versions of Fick's law.
Basic models of diffusion
Definition of diffusion flux
Each model of diffusion expresses the diffusion flux with the use of concentrations, densities and their derivatives. Flux is a vector J representing the quantity and direction of transfer. Given a small area ΔS with normal ν, the transfer of a physical quantity N through the area per time Δt is ΔN = (J, ν) ΔS Δt + o(ΔS Δt),
where (J, ν) is the inner product and o(...) is the little-o notation. If we use the notation of vector area ΔS = ν ΔS then ΔN = (J, ΔS) Δt + o(Δt).
The dimension of the diffusion flux is [flux] = [quantity]/([time]·[area]). The diffusing physical quantity N may be the number of particles, mass, energy, electric charge, or any other scalar extensive quantity. For its density, n, the diffusion equation has the form ∂n/∂t = −∇·J + W,
where W is the intensity of any local source of this quantity (for example, the rate of a chemical reaction).
For the diffusion equation, the no-flux boundary conditions can be formulated as (J(x), ν(x)) = 0 on the boundary, where ν(x) is the normal to the boundary at point x.
Normal single component concentration gradient
Fick's first law: the diffusion flux, J, is proportional to the negative gradient of the spatial concentration, n(x, t): J = −D ∇n(x, t),
where D is the diffusion coefficient. The corresponding diffusion equation (Fick's second law) is ∂n(x, t)/∂t = ∇·(D ∇n(x, t)).
In case the diffusion coefficient is independent of the concentration, Fick's second law can be simplified to ∂n(x, t)/∂t = D Δn(x, t),
where Δ is the Laplace operator, Δn = Σi ∂²n/∂xi².
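For a constant diffusion coefficient in one dimension, Fick's second law reduces to ∂n/∂t = D ∂²n/∂x², which can be integrated numerically. The sketch below is a minimal explicit finite-difference (FTCS) scheme in Python; the grid, the value of D and the boundary treatment are arbitrary illustrative choices, and the time step is kept below the usual stability limit Δt ≤ Δx²/(2D).

    import numpy as np

    D  = 1.0e-9            # diffusion coefficient, m^2/s (typical for a small solute in water)
    L  = 1.0e-3            # domain length, m
    nx = 101
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D   # explicit (FTCS) scheme is stable for dt <= dx^2 / (2 D)

    n = np.zeros(nx)
    n[nx // 2] = 1.0       # impulsive point source in the middle of the domain

    for _ in range(2000):
        lap = (n[:-2] - 2 * n[1:-1] + n[2:]) / dx**2   # discrete Laplacian
        n[1:-1] += dt * D * lap                        # n_new = n + dt * D * d2n/dx2
        n[0] = n[-1] = 0.0                             # absorbing boundaries

    print("total amount remaining:", n.sum())
    print("peak concentration:", n.max())

The initial spike spreads into a bell-shaped profile whose width grows as the square root of time, consistent with the length scales discussed earlier.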
Multicomponent diffusion and thermodiffusion
Fick's law describes diffusion of an admixture in a medium. The concentration of this admixture should be small and the gradient of this concentration should also be small. The driving force of diffusion in Fick's law is the antigradient of concentration, −∇n.
In 1931, Lars Onsager included the multicomponent transport processes in the general context of linear non-equilibrium thermodynamics. For multi-component transport,
Ji = Σj Lij Xj,
where Ji is the flux of the ith physical quantity (component), Xj is the jth thermodynamic force and Lij is Onsager's matrix of kinetic transport coefficients.
The thermodynamic forces for the transport processes were introduced by Onsager as the space gradients of the derivatives of the entropy density (he used the term "force" in quotation marks or "driving force"): Xi = grad(∂s/∂xi),
where xi are the "thermodynamic coordinates".
For the heat and mass transfer one can take x0 = u (the density of internal energy) and xi is the concentration of the ith component. The corresponding driving forces are the space vectors X0 = ∇(1/T) and Xi = −∇(μi/T) for i ≥ 1,
because ds = (1/T) du − Σi (μi/T) dni,
where T is the absolute temperature and μi is the chemical potential of the ith component. It should be stressed that the separate diffusion equations describe the mixing or mass transport without bulk motion. Therefore, the terms with variation of the total pressure are neglected. It is possible for diffusion of small admixtures and for small gradients.
For the linear Onsager equations, we must take the thermodynamic forces in the linear approximation near equilibrium: Xi = Σk (∂²s/∂xi∂xk)|eq ∇xk,
where the derivatives of s are calculated at equilibrium.
The matrix of the kinetic coefficients should be symmetric (Onsager reciprocal relations) and positive definite (for the entropy growth).
The transport equations are
Here, all the indexes are related to the internal energy (0) and various components. The expression in the square brackets is the matrix of the diffusion (i,k > 0), thermodiffusion (i > 0, k = 0 or k > 0, i = 0) and thermal conductivity coefficients.
Under isothermal conditions T = constant. The relevant thermodynamic potential is the free energy (or the free entropy). The thermodynamic driving forces for the isothermal diffusion are antigradients of chemical potentials, −(1/T)∇μj, and the matrix of diffusion coefficients is Dik = (1/T) Σj Lij (∂μj(n, T)/∂nk)
(i,k > 0).
There is intrinsic arbitrariness in the definition of the thermodynamic forces and kinetic coefficients because they are not measurable separately and only their combinations can be measured. For example, in the original work of Onsager the thermodynamic forces include additional multiplier T, whereas in the Course of Theoretical Physics this multiplier is omitted but the sign of the thermodynamic forces is opposite. All these changes are supplemented by the corresponding changes in the coefficients and do not affect the measurable quantities.
Nondiagonal diffusion must be nonlinear
The formalism of linear irreversible thermodynamics (Onsager) generates the systems of linear diffusion equations in the form ∂ci/∂t = Σj Dij Δcj.
If the matrix of diffusion coefficients is diagonal, then this system of equations is just a collection of decoupled Fick's equations for various components. Assume that diffusion is non-diagonal, for example, D12 ≠ 0, and consider the state with c2 = ... = cn = 0. At this state, ∂c2/∂t = D21 Δc1. If D21 Δc1(x) < 0 at some points, then c2 becomes negative at these points in a short time. Therefore, linear non-diagonal diffusion does not preserve positivity of concentrations. Non-diagonal equations of multicomponent diffusion must be non-linear.
Applied forces
The Einstein relation (kinetic theory) connects the diffusion coefficient and the mobility (the ratio of the particle's terminal drift velocity to an applied force). For charged particles: D = μ kB T / q,
where D is the diffusion constant, μ is the "mobility", kB is the Boltzmann constant, T is the absolute temperature, and q is the elementary charge, that is, the charge of one electron.
Below, to combine in the same formula the chemical potential μ and the mobility, we use for mobility the notation 𝔪.
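A small numerical illustration of the Einstein relation for charged particles (Python; the electron mobility used is a rough textbook value for silicon at room temperature and is quoted only as an example):

    k_B = 1.380649e-23     # Boltzmann constant, J/K
    q   = 1.602176634e-19  # elementary charge, C

    def einstein_diffusion(mobility, temperature):
        """D = mu * k_B * T / q for a charged particle (mobility in m^2/(V*s))."""
        return mobility * k_B * temperature / q

    mu_electron = 0.135    # m^2/(V*s), approximate electron mobility in silicon
    T = 300.0              # K
    print(f"D = {einstein_diffusion(mu_electron, T)*1e4:.1f} cm^2/s")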
Diffusion across a membrane
The mobility-based approach was further applied by T. Teorell. In 1935, he studied the diffusion of ions through a membrane. He formulated the essence of his approach in the formula:
the flux is equal to mobility × concentration × force per gram-ion.
This is the so-called Teorell formula. The term "gram-ion" ("gram-particle") is used for a quantity of a substance that contains the Avogadro number of ions (particles). The common modern term is mole.
The force under isothermal conditions consists of two parts:
Diffusion force caused by concentration gradient: −RT(1/n)∇n = −RT∇(ln(n/n^eq)).
Electrostatic force caused by electric potential gradient: −q∇φ.
Here R is the gas constant, T is the absolute temperature, n is the concentration, the equilibrium concentration is marked by a superscript "eq", q is the charge and φ is the electric potential.
The simple but crucial difference between the Teorell formula and the Onsager laws is the concentration factor in the Teorell expression for the flux. In the Einstein–Teorell approach, if for the finite force the concentration tends to zero then the flux also tends to zero, whereas the Onsager equations violate this simple and physically obvious rule.
The general formulation of the Teorell formula for non-perfect systems under isothermal conditions is
where μ is the chemical potential, μ0 is the standard value of the chemical potential.
The expression a = exp((μ − μ0)/RT) is the so-called activity. It measures the "effective concentration" of a species in a non-ideal mixture. In this notation, the Teorell formula for the flux takes a particularly simple form when written in terms of the activity.
The standard derivation of the activity includes a normalization factor, and for small concentrations a ≈ n/n⊖, where n⊖ is the standard concentration. Therefore, this formula for the flux describes the flux of the normalized dimensionless quantity n/n⊖.
Ballistic time scale
The Einstein model neglects the inertia of the diffusing particle. The alternative
Langevin equation starts with Newton's second law of motion: m d²x/dt² = −(1/μ) dx/dt + F(t),
where
x is the position.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
F is the random force applied to the particle.
t is time.
Solving this equation, one obtains the time-dependent diffusion constant in the long-time limit and when the particle is significantly denser than the surrounding fluid: D(t) = μ kB T [1 − exp(−t/(m μ))],
where
kB is the Boltzmann constant;
T is the absolute temperature.
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory).
m is the mass of the particle.
t is time.
At long time scales, Einstein's result is recovered, but the ballistic regime at short time scales is also explained. Moreover, unlike the Einstein approach, a velocity can be defined, leading to the fluctuation-dissipation theorem, connecting the competition between friction and random forces in defining the temperature.
Jumps on the surface and in solids
Diffusion of reagents on the surface of a catalyst may play an important role in heterogeneous catalysis. The model of diffusion in the ideal monolayer is based on the jumps of the reagents on the nearest free places. This model was used for CO on Pt oxidation under low gas pressure.
The system includes several reagents on the surface. Their surface concentrations are ci (i = 1, ..., n). The surface is a lattice of the adsorption places. Each
reagent molecule fills a place on the surface. Some of the places are free. The concentration of the free places is z. The sum of all ci (including the free places) is constant, the density of adsorption places b.
The jump model gives for the diffusion flux of ci (i = 1, ..., n): Ji = −Di [z ∇ci − ci ∇z].
The corresponding diffusion equation is: ∂ci/∂t = −div Ji = Di div[z ∇ci − ci ∇z].
Due to the conservation law, z = b − Σi ci, and we
have the system of n diffusion equations. For one component we get Fick's law and linear equations because z ∇c1 − c1 ∇z = b ∇c1. For two and more components the equations are nonlinear.
If all particles can exchange their positions with their closest neighbours then a simple generalization gives Ji = −Σj Dij [cj ∇ci − ci ∇cj],
where Dij = Dji ≥ 0 is a symmetric matrix of coefficients that characterize the intensities of jumps. The free places (vacancies) should be considered as special "particles" with concentration c0 = z.
Various versions of these jump models are also suitable for simple diffusion mechanisms in solids.
Porous media
For diffusion in porous media the basic equations are (if Φ is constant): J = −Φ D ∇n^m and ∂n/∂t = D Δn^m,
where D is the diffusion coefficient, Φ is porosity, n is the concentration, and m > 0 (usually m > 1, the case m = 1 corresponds to Fick's law).
Care must be taken to properly account for the porosity (Φ) of the porous medium in both the flux terms and the accumulation terms. For example, as the porosity goes to zero, the molar flux in the porous medium goes to zero for a given concentration gradient. Upon applying the divergence of the flux, the porosity terms cancel out and the second equation above is formed.
For diffusion of gases in porous media this equation is the formalization of Darcy's law: the volumetric flux of a gas in the porous media is q = −(k/μ) ∇p,
where k is the permeability of the medium, μ is the viscosity and p is the pressure.
The advective molar flux is given as
J = nq
and, for a gas with the equation of state p ∝ n^γ, Darcy's law gives the equation of diffusion in porous media with m = γ + 1.
In porous media, the average linear velocity (ν) is related to the volumetric flux as ν = q/Φ.
Combining the advective molar flux with the diffusive flux gives the advection dispersion equation
For underground water infiltration, the Boussinesq approximation gives the same equation with m = 2.
For plasma with the high level of radiation, the Zeldovich–Raizer equation gives m > 4 for the heat transfer.
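A minimal sketch of Darcy's law and the average linear velocity used above (Python; permeability, viscosity, porosity and pressure gradient are generic illustrative values, not data for any particular medium):

    def darcy_flux(permeability, viscosity, pressure_gradient):
        """Volumetric flux q = -(k / mu) * dp/dx (m/s for SI inputs)."""
        return -(permeability / viscosity) * pressure_gradient

    def average_linear_velocity(q, porosity):
        """Average linear (seepage) velocity nu = q / porosity."""
        return q / porosity

    k    = 1.0e-12   # m^2, a moderately permeable sand (illustrative)
    mu   = 1.0e-3    # Pa*s, water at ~20 C
    dpdx = -1.0e4    # Pa/m, pressure falling in the +x direction
    phi  = 0.3       # porosity

    q  = darcy_flux(k, mu, dpdx)
    nu = average_linear_velocity(q, phi)
    print(f"q  = {q:.2e} m/s")    # volumetric (Darcy) flux
    print(f"nu = {nu:.2e} m/s")   # average linear velocity through the pores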
Diffusion in physics
Diffusion coefficient in kinetic theory of gases
The diffusion coefficient D is the coefficient in Fick's first law J = −D ∂n/∂x, where J is the diffusion flux (amount of substance) per unit area per unit time, n (for ideal mixtures) is the concentration, and x is the position [length].
Consider two gases with molecules of the same diameter d and mass m (self-diffusion). In this case, the elementary mean free path theory of diffusion gives for the diffusion coefficient D = (1/3) ℓ vT = (2/3) √(kB³/(π³ m)) · T^(3/2)/(P d²),
where kB is the Boltzmann constant, T is the temperature, P is the pressure, ℓ is the mean free path, ℓ = kB T/(√2 π d² P), and vT is the mean thermal speed: vT = √(8 kB T/(π m)).
We can see that the diffusion coefficient in the mean free path approximation grows with T as T3/2 and decreases with P as 1/P. If we use for P the ideal gas law P = RnT with the total concentration n, then we can see that for given concentration n the diffusion coefficient grows with T as T1/2 and for given temperature it decreases with the total concentration as 1/n.
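A hedged numerical sketch of this mean-free-path estimate (Python; the hard-sphere diameter and molecular mass are rough values for N2 and are meant only to give the order of magnitude):

    import math

    k_B = 1.380649e-23   # J/K

    def mean_free_path(T, P, d):
        """Mean free path of a hard-sphere gas: k_B T / (sqrt(2) * pi * d^2 * P)."""
        return k_B * T / (math.sqrt(2) * math.pi * d**2 * P)

    def mean_thermal_speed(T, m):
        """Mean thermal speed sqrt(8 k_B T / (pi m))."""
        return math.sqrt(8 * k_B * T / (math.pi * m))

    def self_diffusion(T, P, d, m):
        """Elementary estimate D = (1/3) * mean_free_path * mean_thermal_speed."""
        return mean_free_path(T, P, d) * mean_thermal_speed(T, m) / 3.0

    # Rough hard-sphere parameters for N2 at ambient conditions (illustrative only).
    T, P = 300.0, 1.013e5       # K, Pa
    d = 3.7e-10                 # m
    m = 28.0 * 1.6605e-27       # kg
    print(f"mean free path ~ {mean_free_path(T, P, d)*1e9:.0f} nm")
    print(f"D ~ {self_diffusion(T, P, d, m)*1e4:.2f} cm^2/s")

The elementary estimate gives roughly 0.1 cm^2/s, the right order of magnitude for gas self-diffusion at atmospheric pressure.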
For two different gases, A and B, with molecular masses mA, mB and molecular diameters dA, dB, the mean free path estimate of the diffusion coefficient of A in B and B in A is:
The theory of diffusion in gases based on Boltzmann's equation
In Boltzmann's kinetics of the mixture of gases, each gas has its own distribution function, fi(x, c, t), where t is the time moment, x is position and c is the velocity of a molecule of the ith component of the mixture. Each component has its mean velocity Ci(x, t). If the velocities do not coincide then there exists diffusion.
In the Chapman–Enskog approximation, all the distribution functions are expressed through the densities of the conserved quantities:
individual concentrations of particles, ni (particles per volume),
density of momentum, Σi mi ni Ci (mi is the ith particle mass),
density of kinetic energy
The kinetic temperature T and pressure P are defined in 3D space as
where is the total density.
For two gases, the difference between velocities, is given by the expression:
where is the force applied to the molecules of the ith component and is the thermodiffusion ratio.
The coefficient D12 is positive. This is the diffusion coefficient. Four terms in the formula for C1−C2 describe four main effects in the diffusion of gases:
describes the flux of the first component from the areas with the high ratio n1/n to the areas with lower values of this ratio (and, analogously the flux of the second component from high n2/n to low n2/n because n2/n = 1 – n1/n);
describes the flux of the heavier molecules to the areas with higher pressure and the lighter molecules to the areas with lower pressure, this is barodiffusion;
describes diffusion caused by the difference of the forces applied to molecules of different types. For example, in the Earth's gravitational field, the heavier molecules should go down, or in electric field the charged molecules should move, until this effect is not equilibrated by the sum of other terms. This effect should not be confused with barodiffusion caused by the pressure gradient.
describes thermodiffusion, the diffusion flux caused by the temperature gradient.
All these effects are called diffusion because they describe the differences between velocities of different components in the mixture. Therefore, these effects cannot be described as a bulk transport and differ from advection or convection.
In the first approximation,
for rigid spheres;
for repulsing force
The number is defined by quadratures (formulas (3.7), (3.9), Ch. 10 of the classical Chapman and Cowling book)
We can see that the dependence on T for the rigid spheres is the same as for the simple mean free path theory but for the power repulsion laws the exponent is different. Dependence on a total concentration n for a given temperature has always the same character, 1/n.
In applications to gas dynamics, the diffusion flux and the bulk flow should be joined in one system of transport equations. The bulk flow describes the mass transfer. Its velocity V is the mass average velocity. It is defined through the momentum density and the mass concentrations:
where is the mass concentration of the ith species, is the mass density.
By definition, the diffusion velocity of the ith component is , .
The mass transfer of the ith component is described by the continuity equation
where is the net mass production rate in chemical reactions, .
In these equations, the term describes advection of the ith component and the term represents diffusion of this component.
In 1948, Wendell H. Furry proposed to use the form of the diffusion rates found in kinetic theory as a framework for the new phenomenological approach to diffusion in gases. This approach was developed further by F.A. Williams and S.H. Lam. For the diffusion velocities in multicomponent gases (N components) they used
Here, is the diffusion coefficient matrix, is the thermal diffusion coefficient, is the body force per unit mass acting on the ith species, is the partial pressure fraction of the ith species (and is the partial pressure), is the mass fraction of the ith species, and
Diffusion of electrons in solids
When the density of electrons in solids is not in equilibrium, diffusion of electrons occurs. For example, when a bias is applied to two ends of a chunk of semiconductor, or a light shines on one end, electrons diffuse from high density regions (center) to low density regions (two ends), forming a gradient of electron density. This process generates current, referred to as diffusion current.
Diffusion current can also be described by Fick's first law J = −D ∂n/∂x,
where J is the diffusion current density (amount of substance) per unit area per unit time, n (for ideal mixtures) is the electron density, and x is the position [length].
Diffusion in geophysics
Analytical and numerical models that solve the diffusion equation for different initial and boundary conditions have been popular for studying a wide variety of changes to the Earth's surface. Diffusion has been used extensively in erosion studies of hillslope retreat, bluff erosion, fault scarp degradation, wave-cut terrace/shoreline retreat, alluvial channel incision, coastal shelf retreat, and delta progradation. Although the Earth's surface is not literally diffusing in many of these cases, the process of diffusion effectively mimics the holistic changes that occur over decades to millennia. Diffusion models may also be used to solve inverse boundary value problems in which some information about the depositional environment is known from paleoenvironmental reconstruction and the diffusion equation is used to figure out the sediment influx and time series of landform changes.
Dialysis
Dialysis works on the principles of the diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane. Diffusion is a property of substances in water; substances in water tend to move from an area of high concentration to an area of low concentration. Blood flows by one side of a semi-permeable membrane, and a dialysate, or special dialysis fluid, flows by the opposite side. A semipermeable membrane is a thin layer of material that contains holes of various sizes, or pores. Smaller solutes and fluid pass through the membrane, but the membrane blocks the passage of larger substances (for example, red blood cells and large proteins). This replicates the filtering process that takes place in the kidneys when the blood enters the kidneys and the larger substances are separated from the smaller ones in the glomerulus.
Random walk (random motion)
One common misconception is that individual atoms, ions or molecules move randomly, which they do not. A single ion followed in isolation appears to undergo "random" motion, but this motion is not truly random: it is the result of "collisions" with other ions. As such, the movement of a single atom, ion, or molecule within a mixture only appears random when viewed in isolation. The movement of a substance within a mixture by "random walk" is governed by the kinetic energy within the system, which can be affected by changes in concentration, pressure or temperature. (This is a classical description. At smaller scales, quantum effects will be non-negligible, in general. Thus, the study of the movement of a single atom becomes more subtle since particles at such small scales are described by probability amplitudes rather than deterministic measures of position and velocity.)
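The link between the random walk of individual particles and diffusive spreading can be made concrete with a short simulation. The sketch below (Python with NumPy; the step length, walker count and dimensionality are arbitrary) follows an ensemble of unbiased one-dimensional walkers and shows that the mean squared displacement grows linearly with the number of steps, the hallmark of normal (Fickian) diffusion.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unbiased 1-D random walk: each particle takes +1 or -1 steps of unit length.
    n_particles, n_steps = 5000, 1000
    steps = rng.choice([-1.0, 1.0], size=(n_steps, n_particles))
    positions = np.cumsum(steps, axis=0)   # position of every walker after each step
    msd = (positions**2).mean(axis=1)      # mean squared displacement over the ensemble

    for k in (10, 100, 1000):
        print(f"after {k:4d} steps: MSD = {msd[k-1]:7.1f} (theory: {k})")
    # MSD grows linearly with the number of steps, i.e. <x^2> = 2*D*t with D = 1/2
    # in these units -- the signature of normal (Fickian) diffusion.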
Separation of diffusion from convection in gases
While Brownian motion of multi-molecular mesoscopic particles (like the pollen grains studied by Brown) is observable under an optical microscope, molecular diffusion can only be probed in carefully controlled experimental conditions. Since Graham's experiments, it has been well known that avoiding convection is necessary, and this may be a non-trivial task.
Under normal conditions, molecular diffusion dominates only at lengths in the nanometre-to-millimetre range. On larger length scales, transport in liquids and gases is normally due to another transport phenomenon, convection. To separate diffusion in these cases, special efforts are needed.
In contrast, heat conduction through solid media is an everyday occurrence (for example, a metal spoon partly immersed in a hot liquid). This explains why the diffusion of heat was explained mathematically before the diffusion of mass.
Other types of diffusion
Anisotropic diffusion, also known as the Perona–Malik equation, enhances high gradients
Atomic diffusion, in solids
Bohm diffusion, spread of plasma across magnetic fields
Eddy diffusion, in coarse-grained description of turbulent flow
Effusion of a gas through small holes
Electronic diffusion, resulting in an electric current called the diffusion current
Facilitated diffusion, present in some organisms
Gaseous diffusion, used for isotope separation
Heat equation, diffusion of thermal energy
Itō diffusion, mathematisation of Brownian motion, continuous stochastic process.
Knudsen diffusion of gas in long pores with frequent wall collisions
Lévy flight
Molecular diffusion, diffusion of molecules from more dense to less dense areas
Momentum diffusion, e.g. the diffusion of the hydrodynamic velocity field
Photon diffusion
Plasma diffusion
Random walk, model for diffusion
Reverse diffusion, against the concentration gradient, in phase separation
Rotational diffusion, random reorientation of molecules
Spin diffusion, diffusion of spin magnetic moments in solids
Surface diffusion, diffusion of adparticles on a surface
Taxis is an animal's directional movement activity in response to a stimulus
Kinesis is an animal's non-directional movement activity in response to a stimulus
Trans-cultural diffusion, diffusion of cultural traits across geographical area
Turbulent diffusion, transport of mass, heat, or momentum within a turbulent fluid
See also
References
Chemometrics | Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics.
Background
Chemometrics is applied to solve both descriptive and predictive problems in experimental natural sciences, especially in chemistry. In descriptive applications, properties of chemical systems are modeled with the intent of learning the underlying relationships and structure of the system (i.e., model understanding and identification). In predictive applications, properties of chemical systems are modeled with the intent of predicting new properties or behavior of interest. In both cases, the datasets can be small but are often large and complex, involving hundreds to thousands of variables, and hundreds to thousands of cases or observations.
Chemometric techniques are particularly heavily used in analytical chemistry and metabolomics, and the development of improved chemometric methods of analysis also continues to advance the state of the art in analytical instrumentation and methodology. It is an application-driven discipline, and thus while the standard chemometric methodologies are very widely used industrially, academic groups are dedicated to the continued development of chemometric theory, method and application development.
Origins
Although one could argue that even the earliest analytical experiments in chemistry involved a form of chemometrics, the field is generally recognized to have emerged in the 1970s as computers became increasingly exploited for scientific investigation. The term 'chemometrics' was coined by Svante Wold in a 1971 grant application, and the International Chemometrics Society was formed shortly thereafter by Svante Wold and Bruce Kowalski, two pioneers in the field. Wold was a professor of organic chemistry at Umeå University, Sweden, and Kowalski was a professor of analytical chemistry at University of Washington, Seattle.
Many early applications involved multivariate classification, numerous quantitative predictive applications followed, and by the late 1970s and early 1980s a wide variety of data- and computer-driven chemical analyses were occurring.
Multivariate analysis was a critical facet even in the earliest applications of chemometrics. Data from infrared and UV/visible spectroscopy are often counted in thousands of measurements per sample. Mass spectrometry, nuclear magnetic resonance, atomic emission/absorption and chromatography experiments are also all by nature highly multivariate. The structure of these data was found to be conducive to using techniques such as principal components analysis (PCA), partial least-squares (PLS), orthogonal partial least-squares (OPLS), and two-way orthogonal partial least squares (O2PLS). This is primarily because, while the datasets may be highly multivariate, there is strong and often linear low-rank structure present. PCA and PLS have been shown over time to be very effective at empirically modeling the more chemically interesting low-rank structure, exploiting the interrelationships or 'latent variables' in the data, and providing alternative compact coordinate systems for further numerical analysis such as regression, clustering, and pattern recognition. Partial least squares in particular was heavily used in chemometric applications for many years before it began to find regular use in other fields.
Through the 1980s three dedicated journals appeared in the field: Journal of Chemometrics, Chemometrics and Intelligent Laboratory Systems, and Journal of Chemical Information and Modeling. These journals continue to cover both fundamental and methodological research in chemometrics. At present, most routine applications of existing chemometric methods are commonly published in application-oriented journals (e.g., Applied Spectroscopy, Analytical Chemistry, Analytica Chimica Acta, Talanta). Several important books/monographs on chemometrics were also first published in the 1980s, including the first edition of Malinowski's Factor Analysis in Chemistry, Sharaf, Illman and Kowalski's Chemometrics, Massart et al. Chemometrics: a textbook, and Multivariate Calibration by Martens and Naes.
Some large chemometric application areas have gone on to represent new domains, such as molecular modeling and QSAR, cheminformatics, the '-omics' fields of genomics, proteomics, metabonomics and metabolomics, process modeling and process analytical technology.
An account of the early history of chemometrics was published as a series of interviews by Geladi and Esbensen.
Techniques
Multivariate calibration
Many chemical problems and applications of chemometrics involve calibration. The objective is to develop models which can be used to predict properties of interest based on measured properties of the chemical system, such as pressure, flow, temperature, infrared, Raman, NMR spectra and mass spectra. Examples include the development of multivariate models relating 1) multi-wavelength spectral response to analyte concentration, 2) molecular descriptors to biological activity, 3) multivariate process conditions/states to final product attributes. The process requires a calibration or training data set, which includes reference values for the properties of interest for prediction, and the measured attributes believed to correspond to these properties. For case 1), for example, one can assemble data from a number of samples, including concentrations for an analyte of interest for each sample (the reference) and the corresponding infrared spectrum of that sample. Multivariate calibration techniques such as partial-least squares regression, or principal component regression (and near countless other methods) are then used to construct a mathematical model that relates the multivariate response (spectrum) to the concentration of the analyte of interest, and such a model can be used to efficiently predict the concentrations of new samples.
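As a sketch of such a calibration workflow (using scikit-learn's partial least squares implementation on synthetic data; the sizes, coefficients and variable names are illustrative only and not part of the original text):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic calibration set: 60 "spectra" generated from 4 latent components,
# with y as the corresponding reference analyte concentrations.
rng = np.random.default_rng(1)
comps = rng.random((4, 300))
scores = rng.random((60, 4))
X = scores @ comps + 0.01 * rng.standard_normal((60, 300))
y = scores @ np.array([1.0, 0.5, 0.2, 0.1])

# Build the calibration model on training samples ...
pls = PLSRegression(n_components=4)
pls.fit(X[:45], y[:45])

# ... and predict the analyte concentration of new samples from spectra alone.
y_pred = pls.predict(X[45:]).ravel()
rmsep = np.sqrt(np.mean((y_pred - y[45:]) ** 2))
print(f"root mean squared error of prediction: {rmsep:.4f}")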
Techniques in multivariate calibration are often broadly categorized as classical or inverse methods. The principal difference between these approaches is that in classical calibration the models are solved such that they are optimal in describing the measured analytical responses (e.g., spectra) and can therefore be considered optimal descriptors, whereas in inverse methods the models are solved to be optimal in predicting the properties of interest (e.g., concentrations) and can be considered optimal predictors. Inverse methods usually require less physical knowledge of the chemical system, and at least in theory provide superior predictions in the mean-squared error sense, and hence inverse approaches tend to be more frequently applied in contemporary multivariate calibration.
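Schematically (a standard textbook formulation rather than a quotation from any particular source), with X the samples-by-wavelengths response matrix, C the matrix of analyte concentrations, S the pure-component spectra and y a single property of interest, the two viewpoints can be written as:

\text{classical: } X = C S^{\top} + E_X, \qquad \hat{S}^{\top} = (C^{\top} C)^{-1} C^{\top} X

\text{inverse: } y = X b + e, \qquad \hat{b} = X^{+} y, \qquad \hat{y}_{\text{new}} = x_{\text{new}}^{\top} \hat{b}

In practice the pseudoinverse X^{+} is rarely computed directly; methods such as principal component regression and PLS stabilize the estimate of b by working in a low-rank score space.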
The main advantage of multivariate calibration techniques is that fast, cheap, or non-destructive analytical measurements (such as optical spectroscopy) can be used to estimate sample properties which would otherwise require time-consuming, expensive or destructive testing (such as LC-MS). Equally important is that multivariate calibration allows for accurate quantitative analysis in the presence of heavy interference by other analytes. The selectivity of the analytical method is provided as much by the mathematical calibration as by the analytical measurement modality. For example, near-infrared spectra, which are extremely broad and non-selective compared to other analytical techniques (such as infrared or Raman spectra), can often be used successfully in conjunction with carefully developed multivariate calibration methods to predict concentrations of analytes in very complex matrices.
Classification, pattern recognition, clustering
Supervised multivariate classification techniques are closely related to multivariate calibration techniques in that a calibration or training set is used to develop a mathematical model capable of classifying future samples. The techniques employed in chemometrics are similar to those used in other fields – multivariate discriminant analysis, logistic regression, neural networks, regression/classification trees. The use of rank reduction techniques in conjunction with these conventional classification methods is routine in chemometrics, for example discriminant analysis on principal components or partial least squares scores.
A family of techniques, referred to as class-modelling or one-class classifiers, is able to build models for an individual class of interest. Such methods are particularly useful in the case of quality control and authenticity verification of products.
Unsupervised classification (also termed cluster analysis) is also commonly used to discover patterns in complex data sets, and again many of the core techniques used in chemometrics are common to other fields such as machine learning and statistical learning.
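A minimal sketch of the "discriminant analysis on principal components" pattern mentioned above, written with scikit-learn on made-up data (class structure, sample sizes and component counts are arbitrary):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Synthetic labeled "spectra" for two product classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.random((30, 200)) + 0.05,   # class 0, slightly offset
               rng.random((30, 200))])         # class 1
labels = np.array([0] * 30 + [1] * 30)

# Rank reduction (PCA scores) followed by a conventional classifier (LDA).
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X, labels)
print(model.predict(X[:5]))                    # predicted classes for a few samples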
Multivariate curve resolution
In chemometric parlance, multivariate curve resolution seeks to deconstruct data sets with limited or absent reference information and system knowledge. Some of the earliest work on these techniques was done by Lawton and Sylvestre in the early 1970s. These approaches are also called self-modeling mixture analysis, blind source/signal separation, and spectral unmixing. For example, from a data set comprising fluorescence spectra from a series of samples each containing multiple fluorophores, multivariate curve resolution methods can be used to extract the fluorescence spectra of the individual fluorophores, along with their relative concentrations in each of the samples, essentially unmixing the total fluorescence spectrum into the contributions from the individual components. The problem is usually ill-determined due to rotational ambiguity (many possible solutions can equivalently represent the measured data), so the application of additional constraints is common, such as non-negativity, unimodality, or known interrelationships between the individual components (e.g., kinetic or mass-balance constraints).
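As a rough illustration of constrained bilinear unmixing, the sketch below uses non-negative matrix factorization as a stand-in: it enforces only the non-negativity constraint and is not the full multivariate curve resolution procedure. The data are synthetic.

import numpy as np
from sklearn.decomposition import NMF

# Synthetic mixture data: each row is a "fluorescence spectrum" formed as a
# non-negative combination of two underlying component spectra.
rng = np.random.default_rng(3)
pure_spectra = np.abs(rng.standard_normal((2, 400)))
conc = np.abs(rng.standard_normal((25, 2)))
D = conc @ pure_spectra

# Factor D ~ C_est @ S_est with everything constrained to be non-negative.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
C_est = model.fit_transform(D)    # estimated relative concentrations per sample
S_est = model.components_         # estimated component spectra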
Other techniques
Experimental design remains a core area of study in chemometrics and several monographs are specifically devoted to experimental design in chemical applications. Sound principles of experimental design have been widely adopted within the chemometrics community, although many complex experiments are purely observational, and there can be little control over the properties and interrelationships of the samples and sample properties.
Signal processing is also a critical component of almost all chemometric applications, particularly the use of signal pretreatments to condition data prior to calibration or classification. The techniques employed commonly in chemometrics are often closely related to those used in related fields. Signal pre-processing may affect the way in which outcomes of the final data processing can be interpreted.
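Two pretreatments often applied to spectra are sketched here on synthetic data (the window length and polynomial order are arbitrary choices): a Savitzky-Golay derivative and the standard normal variate transform.

import numpy as np
from scipy.signal import savgol_filter

# X: synthetic (samples x wavelengths) block of raw spectra.
rng = np.random.default_rng(4)
X = rng.random((10, 300))

# Savitzky-Golay smoothed first derivative along the wavelength axis.
X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

# Standard normal variate (SNV): each spectrum centered and scaled individually.
X_snv = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)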
Performance characterization and figures of merit: Like most arenas in the physical sciences, chemometrics is quantitatively oriented, so considerable emphasis is placed on performance characterization, model selection, verification & validation, and figures of merit. The performance of quantitative models is usually specified by the root mean squared error in predicting the attribute of interest, and the performance of classifiers as a true-positive rate/false-positive rate pair (or a full ROC curve). A recent report by Olivieri et al. provides a comprehensive overview of figures of merit and uncertainty estimation in multivariate calibration, including multivariate definitions of selectivity, sensitivity, SNR and prediction interval estimation. Chemometric model selection usually involves the use of resampling tools such as the bootstrap, permutation tests and cross-validation.
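A typical cross-validation loop for choosing the number of PLS latent variables might look like the following sketch (synthetic data again; the "best" component count is whichever minimizes the cross-validated root mean squared error):

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic calibration data (X: spectra-like matrix, y: reference values).
rng = np.random.default_rng(5)
B = rng.random((3, 250))
T = rng.random((50, 3))
X = T @ B + 0.01 * rng.standard_normal((50, 250))
y = T @ np.array([2.0, 1.0, 0.5])

# Root mean squared error of cross-validation (RMSECV) versus model size.
for n_components in range(1, 7):
    y_cv = cross_val_predict(PLSRegression(n_components=n_components), X, y, cv=10).ravel()
    print(n_components, round(float(np.sqrt(np.mean((y_cv - y) ** 2))), 4))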
Multivariate statistical process control (MSPC), modeling and optimization account for a substantial amount of historical chemometric development. Spectroscopy has been used successfully for online monitoring of manufacturing processes for 30–40 years, and this process data is highly amenable to chemometric modeling. Specifically in terms of MSPC, multiway modeling of batch and continuous processes is increasingly common in industry and remains an active area of research in chemometrics and chemical engineering. Process analytical chemistry as it was originally termed, or the newer term process analytical technology, continues to draw heavily on chemometric methods and MSPC.
Multiway methods are heavily used in chemometric applications. These are higher-order extensions of more widely used methods. For example, while the analysis of a table (matrix, or second-order array) of data is routine in several fields, multiway methods are applied to data sets that involve 3rd, 4th, or higher-orders. Data of this type is very common in chemistry, for example a liquid-chromatography / mass spectrometry (LC-MS) system generates a large matrix of data (elution time versus m/z) for each sample analyzed. The data across multiple samples thus comprises a data cube. Batch process modeling involves data sets that have time vs. process variables vs. batch number. The multiway mathematical methods applied to these sorts of problems include PARAFAC, trilinear decomposition, and multiway PLS and PCA.
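The sketch below only illustrates the shape of such data and the common "unfolding" step that lets ordinary two-way methods be applied to a data cube; genuinely trilinear methods such as PARAFAC (available in tensor decomposition libraries) model the cube directly instead. All sizes are invented.

import numpy as np

# Hypothetical LC-MS-style measurements: each sample gives an
# (elution time x m/z) matrix; stacking samples yields a third-order array.
n_samples, n_times, n_mz = 20, 120, 80
rng = np.random.default_rng(6)
cube = rng.random((n_samples, n_times, n_mz))

# Unfolding (matricization): flatten each sample's matrix into one long row so
# that two-way methods such as PCA or PLS can be applied to the cube.
unfolded = cube.reshape(n_samples, n_times * n_mz)
print(cube.shape, "->", unfolded.shape)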
References
Further reading
External links
An Introduction to Chemometrics (archived website)
IUPAC Glossary for Chemometrics
Homepage of Chemometrics, Sweden
Homepage of Chemometrics (a starting point)
Chemometric Analysis for Spectroscopy
General resource on advanced chemometric methods and recent developments
Computational chemistry
Metrics
Analytical chemistry
Cheminformatics
Organic reaction
Organic reactions are chemical reactions involving organic compounds. The basic organic chemistry reaction types are addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions, photochemical reactions and redox reactions. In organic synthesis, organic reactions are used in the construction of new organic molecules. The production of many man-made chemicals such as drugs, plastics, food additives and fabrics depends on organic reactions.
The oldest organic reactions are combustion of organic fuels and saponification of fats to make soap. Modern organic chemistry starts with the Wöhler synthesis in 1828. In the history of the Nobel Prize in Chemistry, awards have been given for the invention of specific organic reactions such as the Grignard reaction in 1912, the Diels–Alder reaction in 1950, the Wittig reaction in 1979 and olefin metathesis in 2005.
Classifications
Organic chemistry has a strong tradition of naming a specific reaction after its inventor or inventors, and a long list of so-called named reactions exists, conservatively estimated at 1000. A very old named reaction is the Claisen rearrangement (1912) and a recent named reaction is the Bingel reaction (1993). When the named reaction is difficult to pronounce or very long, as in the Corey–House–Posner–Whitesides reaction, it helps to use an abbreviation, as in the CBS reduction. The number of reaction names that hint at the actual process taking place is much smaller, for example the ene reaction or the aldol reaction.
Another approach to classifying organic reactions is by the type of reagent, many of them inorganic, required in a specific transformation. The major types are oxidizing agents such as osmium tetroxide, reducing agents such as lithium aluminium hydride, bases such as lithium diisopropylamide and acids such as sulfuric acid.
Finally, reactions are also classified by mechanistic class. Commonly these classes are (1) polar, (2) radical, and (3) pericyclic. Polar reactions are characterized by the movement of electron pairs from a well-defined source (a nucleophilic bond or lone pair) to a well-defined sink (an electrophilic center with a low-lying antibonding orbital). Participating atoms undergo changes in charge, both in the formal sense as well as in terms of the actual electron density. The vast majority of organic reactions fall under this category. Radical reactions are characterized by species with unpaired electrons (radicals) and the movement of single electrons. Radical reactions are further divided into chain and nonchain processes. Finally, pericyclic reactions involve the redistribution of chemical bonds along a cyclic transition state. Although electron pairs are formally involved, they move around in a cycle without a true source or sink. These reactions require the continuous overlap of participating orbitals and are governed by orbital symmetry considerations. Of course, some chemical processes may involve steps from two (or even all three) of these categories, so this classification scheme is not necessarily straightforward or clear in all cases. Beyond these classes, transition-metal mediated reactions are often considered to form a fourth category of reactions, although this category encompasses a broad range of elementary organometallic processes, many of which have little in common and are very specific.
Fundamentals
Factors governing organic reactions are essentially the same as those of any chemical reaction. Factors specific to organic reactions are those that determine the stability of reactants and products, such as conjugation, hyperconjugation and aromaticity, and the presence and stability of reactive intermediates such as free radicals, carbocations and carbanions.
An organic compound may consist of many isomers. Selectivity in terms of regioselectivity, diastereoselectivity and enantioselectivity is therefore an important criterion for many organic reactions. The stereochemistry of pericyclic reactions is governed by the Woodward–Hoffmann rules and that of many elimination reactions by Zaitsev's rule.
Organic reactions are important in the production of pharmaceuticals. In a 2006 review, it was estimated that 20% of chemical conversions involved alkylations on nitrogen and oxygen atoms, another 20% involved placement and removal of protective groups, 11% involved formation of new carbon–carbon bonds and 10% involved functional group interconversions.
By mechanism
There is no limit to the number of possible organic reactions and mechanisms. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens, although this detailed description of steps is not always clear from a list of reactants alone. Organic reactions can be organized into several basic types, and some reactions fit into more than one category. For example, some substitution reactions follow an addition-elimination pathway. This overview is not intended to include every single organic reaction; rather, it is intended to cover the basic reactions.
In condensation reactions a small molecule, usually water, is split off when two reactants combine in a chemical reaction. The opposite reaction, when water is consumed in a reaction, is called hydrolysis. Many polymerization reactions are derived from organic reactions. They are divided into addition polymerizations and step-growth polymerizations.
In general the stepwise progression of reaction mechanisms can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition to intermediates and products.
By functional groups
Organic reactions can be categorized based on the type of functional group involved in the reaction as a reactant and the functional group that is formed as a result of this reaction. For example, in the Fries rearrangement the reactant is an aryl ester and the reaction product a hydroxyaryl ketone (a phenol).
An overview of functional groups with their preparation and reactivity is presented below:
Other classification
In heterocyclic chemistry, organic reactions are classified by the type of heterocycle formed with respect to ring-size and type of heteroatom. See for instance the chemistry of indoles. Reactions are also categorized by the change in the carbon framework. Examples are ring expansion and ring contraction, homologation reactions, polymerization reactions, insertion reactions, ring-opening reactions and ring-closing reactions.
Organic reactions can also be classified by the type of bond to carbon with respect to the element involved. More reactions are found in organosilicon chemistry, organosulfur chemistry, organophosphorus chemistry and organofluorine chemistry. With the introduction of carbon-metal bonds the field crosses over to organometallic chemistry.
See also
List of organic reactions
Other chemical reactions: inorganic reactions, metabolism, organometallic reactions, polymerization reactions.
Important publications in organic chemistry
References
External links
Organic reactions @ Synarchive.com
Organic reaction flashcards from OSU
list of named reactions from UConn
organic reactions
Study-Organic-Chemistry.com
Organic chemistry
Chemical law
Chemical laws are those laws of nature relevant to chemistry. The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
The laws of stoichiometry, that is, the gravimetric proportions by which chemical elements participate in chemical reactions, elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation.
Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers (e.g., 1:2 O:H in water), although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction. Such compounds are known as non-stoichiometric compounds.
The third stoichiometric law is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element.
More modern laws of chemistry define the relationship between energy and transformations.
In equilibrium, molecules exist in mixture defined by the transformations possible on the timescale of the equilibrium, and are in a ratio defined by the intrinsic energy of the molecules—the lower the intrinsic energy, the more abundant the molecule.
Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs.
There is a transition state (TS) that corresponds to the structure at the top of the energy barrier. The Hammond-Leffler postulate states that this state looks most similar to the product or starting material whose intrinsic energy is closest to that of the energy barrier. Stabilizing this transition state through chemical interaction is one way to achieve catalysis.
All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible.
References
History of chemistry
Click chemistry
Click chemistry is an approach to chemical synthesis that emphasizes efficiency, simplicity, selectivity, and modularity in chemical processes used to join molecular building blocks. It includes both the development and use of "click reactions", a set of simple, biocompatible chemical reactions that meet specific criteria like high yield, fast reaction rates, and minimal byproducts. It was first fully described by Sharpless, Hartmuth C. Kolb, and M. G. Finn of The Scripps Research Institute in 2001. In this seminal paper, Sharpless argued that synthetic chemistry could emulate the way nature constructs complex molecules, using efficient reactions to join together simple, non-toxic building blocks.
The term "click chemistry" was coined in 1998 by K. Barry Sharpless' wife, Jan Dueser, who found the simplicity of this approach to chemical synthesis akin to clicking together Lego blocks. In fact, the simplicity of click chemistry represented a paradigm shift in synthetic chemistry, and has had significant impact in many industries, especially pharmaceutical development. In 2022, the Nobel Prize in Chemistry was jointly awarded to Carolyn R. Bertozzi, Morten P. Meldal and K. Barry Sharpless, "for the development of click chemistry and bioorthogonal chemistry".
Principles
For a reaction to be considered a click reaction, it must satisfy certain characteristics:
modularity
insensitivity to solvent parameters
high chemical yields
insensitivity towards oxygen and water
regiospecificity and stereospecificity
a large thermodynamic driving force (>20 kcal/mol) to favor a reaction with a single reaction product. A distinct exothermic reaction makes a reactant "spring-loaded".
The process would preferably:
have simple reaction conditions
use readily available starting materials and reagents
use no solvent or use a solvent that is benign or easily removed (preferably water)
provide simple product isolation by non-chromatographic methods (crystallisation or distillation)
have high atom economy.
Many of the click chemistry criteria are subjective, and even if measurable and objective criteria could be agreed upon, it is unlikely that any reaction will be perfect for every situation and application. However, several reactions have been identified that fit the concept better than others:
[3+2] cycloadditions, such as the Huisgen 1,3-dipolar cycloaddition, in particular the Cu(I)-catalyzed stepwise variant, are often referred to simply as Click reactions
Thiol-ene reaction
Diels-Alder reaction and inverse electron demand Diels-Alder reaction
[4+1] cycloadditions between isonitriles (isocyanides) and tetrazines
nucleophilic substitution especially to small strained rings like epoxy and aziridines
carbonyl-chemistry-like formation of ureas but not reactions of the aldol type due to low thermodynamic driving force.
addition reactions to carbon-carbon double bonds like dihydroxylation or the alkynes in the thiol-yne reaction.
Sulfur (VI) Fluoride exchange
Specific Click Reactions
Copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC)
The classic click reaction is the copper-catalyzed reaction of an azide with an alkyne to form a 5-membered heteroatom ring: a Cu(I)-catalyzed azide-alkyne cycloaddition (CuAAC). The first triazole synthesis, from diethyl acetylenedicarboxylate and phenyl azide, was reported by Arthur Michael in 1893. Later, in the middle of the 20th century, this family of 1,3-dipolar cycloadditions took on Rolf Huisgen's name after his studies of their reaction kinetics and conditions.
The copper(I)-catalysis of the Huisgen 1,3-dipolar cycloaddition was discovered concurrently and independently by the groups of Valery V. Fokin and K. Barry Sharpless at the Scripps Research Institute in California and Morten Meldal in the Carlsberg Laboratory, Denmark. The copper-catalyzed version of this reaction gives only the 1,4-isomer, whereas Huisgen's non-catalyzed 1,3-dipolar cycloaddition gives both the 1,4- and 1,5-isomers, is slow, and requires a temperature of 100 degrees Celsius.
Moreover, this copper-catalyzed "click" does not require ligands on the metal, although accelerating ligands such as tris(triazolyl)methyl amine ligands with various substituents have been reported and used with success in aqueous solution. Other ligands such as PPh3 and TBIA can also be used, even though PPh3 is liable to Staudinger ligation with the azide substituent. Cu2O in water at room temperature was found also to catalyze the same reaction in 15 minutes with 91% yield.
The first reaction mechanism proposed included one catalytic copper atom; but isotope, kinetic, and other studies have suggested a dicopper mechanism may be more relevant. Even though this reaction proceeds effectively under biological conditions, copper in this range of dosage is cytotoxic. Solutions to this problem have been presented, such as using water-soluble ligands on the copper to enhance cell penetration of the catalyst and thereby reduce the dosage needed, or using chelating ligands to further increase the effective concentration of Cu(I) and thereby decrease the actual dosage.
Although the Cu(I)-catalyzed variant was first reported by Meldal and co-workers for the synthesis of peptidotriazoles on solid support, their conditions were far from the true spirit of click chemistry and were overtaken by the more publicly recognized work of Sharpless. Meldal and co-workers also chose not to label this reaction type "click chemistry", which allegedly caused their discovery to be largely overlooked by the mainstream chemical community. Fokin and Sharpless independently described it as a reliable catalytic process offering "an unprecedented level of selectivity, reliability, and scope for those organic synthesis endeavors which depend on the creation of covalent links between diverse building blocks".
An analogous RuAAC reaction catalyzed by ruthenium, instead of copper, was reported by the Jia and Fokin groups in 2005, and allows for the selective production of 1,5-isomers.
Strain-promoted azide-alkyne cycloaddition (SPAAC)
The Bertozzi group further developed one of Huisgen's copper-free click reactions to overcome the cytotoxicity of the CuAAC reaction. Instead of using Cu(I) to activate the alkyne, the alkyne is instead introduced in a strained difluorooctyne (DIFO), in which the electron-withdrawing, propargylic, gem-fluorines act together with the ring strain to greatly destabilize the alkyne. This destabilization increases the reaction driving force, and the desire of the cycloalkyne to relieve its ring strain.
This reaction proceeds as a concerted [3+2] cycloaddition to the triple bond in a cyclooctyne in the same mechanism as the Huisgen 1,3-dipolar cycloaddition. Substituents other than fluorines, such as benzene rings, are also allowed on the cyclooctyne.
This reaction has been used successfully to probe for azides in living systems, even though the reaction rate is somewhat slower than that of the CuAAC. Moreover, because the synthesis of cyclooctynes often gives low yield, probe development for this reaction has not been as rapid as for other reactions. But cyclooctyne derivatives such as DIFO, dibenzylcyclooctyne (DIBO or DBCO) and biarylazacyclooctynone (BARAC) have all been used successfully in the SPAAC reaction to probe for azides in living systems.
Strain-promoted alkyne-nitrone cycloaddition (SPANC)
Diaryl-strained-cyclooctynes including dibenzylcyclooctyne (DIBO) have also been used to react with 1,3-nitrones in strain-promoted alkyne-nitrone cycloadditions (SPANC) to yield N-alkylated isoxazolines.
Because this reaction is metal-free and proceeds with fast kinetics (k2 as fast as 60 1/Ms, faster than both the CuAAC and the SPAAC), SPANC can be used for live cell labeling. Moreover, substitution on both the carbon and nitrogen atoms of the nitrone dipole, and acyclic and endocyclic nitrones, are all tolerated. This large allowance provides a lot of flexibility for nitrone handle or probe incorporation.
However, the isoxazoline product is not as stable as the triazole product of the CuAAC and the SPAAC, and can undergo rearrangements under biological conditions. Regardless, this reaction is still very useful as it has notably fast reaction kinetics.
The applications of this reaction include labeling proteins containing serine as the first residue: the serine is oxidized to an aldehyde with NaIO4 and then converted to a nitrone with p-methoxybenzenethiol, N-methylhydroxylamine and p-anisidine, and finally incubated with cyclooctyne to give a click product. The SPANC also allows for multiplex labeling.
Reactions of strained alkenes
Strained alkenes also utilize strain-relief as a driving force that allows for their participation in click reactions. Trans-cycloalkenes (usually cyclooctenes) and other strained alkenes such as oxanorbornadiene react in click reactions with a number of partners including azides, tetrazines and tetrazoles. These reaction partners can interact specifically with the strained alkene, staying bioorthogonal to endogenous alkenes found in lipids, fatty acids, cofactors and other natural products.
Alkene and azide [3+2] cycloaddition
Oxanorbornadiene (or another activated alkene) reacts with azides, giving triazoles as a product. However, these product triazoles are not aromatic as they are in the CuAAC or SPAAC reactions, and as a result are not as stable. The activated double bond in oxanorbornadiene makes a triazoline intermediate that subsequently and spontaneously undergoes a retro Diels–Alder reaction to release furan and give 1,2,3- or 1,4,5-triazoles. Even though this reaction is slow, it is useful because oxanorbornadiene is relatively simple to synthesize. The reaction is not, however, entirely chemoselective.
Alkene and tetrazine inverse-demand Diels-Alder
Strained cyclooctenes and other activated alkenes react with tetrazines in an inverse electron-demand Diels-Alder followed by a retro [4+2] cycloaddition (see figure). Like the other reactions of the trans-cyclooctene, ring strain release is a driving force for this reaction. Thus, three-membered and four-membered cycloalkenes, due to their high ring strain, make ideal alkene substrates.
Similar to other [4+2] cycloadditions, electron-donating substituents on the dienophile and electron-withdrawing substituents on the diene accelerate the inverse-demand Diels-Alder. The diene, the tetrazine, by virtue of having the additional nitrogens, is a good diene for this reaction. The dienophile, the activated alkene, can often be attached to electron-donating alkyl groups on target molecules, thus making the dienophile more suitable for the reaction.
Alkene and tetrazole photoclick reaction
The tetrazole-alkene "photoclick" reaction is another dipolar addition that Huisgen first introduced in the late 1960s (Clovis, J. S.; Eckell, A.; Huisgen, R.; Sustmann, R., Chem. Ber. 1967, 100, 60; ChemBioChem 2007, 8, 1504). Tetrazoles with amino or styryl groups can be activated by UV light at 365 nm and react quickly to make fluorogenic pyrazoline products. This reaction scheme is well suited for labeling in live cells, because UV light at 365 nm damages cells minimally and needs to be applied only for short durations, usually around 1–4 minutes. Quantum yields for short wavelength UV light can be higher than 0.5. This allows tetrazoles to be used wavelength-selectively in combination with another photoligation reaction, where at the short wavelength the tetrazole ligation reaction proceeds nearly exclusively and at longer wavelength another reaction (ligation via o-quinodimethanes) proceeds exclusively. Finally, the non-fluorogenic reactants give rise to a fluorogenic product, equipping the reaction with a built-in spectrometric handle.
Both tetrazoles and the alkene groups have been incorporated as protein handles as unnatural amino acids, but this benefit is not unique. Instead, the photoinducibility of the reaction makes it a prime candidate for spatiotemporal specificity in living systems. Challenges include the presence of endogenous alkenes; though these are usually cis-configured (as in fatty acids), they can still react with the activated tetrazole.
Applications
The criteria for click reactions are designed to make the chemistry biocompatible, for applications like isolating and targeting molecules in complex biological environments. In such environments, products accordingly need to be physiologically stable and any byproducts need to be non-toxic (for in vivo systems).
In many applications, click reactions join a biomolecule and a reporter molecule or other molecular probe, a process called bioconjugation. The possibility of attaching fluorophores and other reporter molecules has made click chemistry a very powerful tool for identifying, locating, and characterizing both old and new biomolecules.
One of the earliest and most important methods in bioconjugation was to express a reporter gene, such as the gene for green fluorescent protein (GFP), on the same genetic sequence as a protein of interest. In this way, the protein can be identified in cells and tissues by the green fluorescence. However, this approach comes with several difficulties, as the GFP can affect the ability of the protein to achieve its normal shape or hinder its normal expression and functions. Additionally, using this method, GFP can only be attached to proteins, leaving other important biomolecular classes (nucleic acids, lipids, carbohydrates, etc.) out of reach.
To overcome these challenges, chemists have opted to proceed by identifying pairs of bioorthogonal reaction partners, thus allowing the use of small exogenous molecules as biomolecular probes. A fluorophore can be attached to one of these probes to give a fluorescence signal upon binding of the reporter molecule to the target—just as GFP fluoresces when it is expressed with the target.
Limitations now emerge from the chemistry between the probe and its target. In order for this technique to be useful in biological systems, click chemistry must run at or near biological conditions, produce little and (ideally) non-toxic byproducts, give (preferably) single and stable products under the same conditions, and proceed quickly to high yield in one pot. Existing reactions, such as the Staudinger ligation and the Huisgen 1,3-dipolar cycloaddition, have been modified and optimized for such reaction conditions. Today, research in the field concerns not only understanding and developing new reactions and repurposing and re-understanding known reactions, but also expanding methods used to incorporate reaction partners into living systems, engineering novel reaction partners, and developing applications for bioconjugation.
By developing specific and controllable bioorthogonal reactions, scientists have opened up the possibility of hitting particular targets in complex cell lysates. Recently, scientists have adapted click chemistry for use in live cells, for example using small molecule probes that find and attach to their targets by click reactions. Despite challenges of cell permeability, bioorthogonality, background labeling, and reaction efficiency, click reactions have already proven useful in a new generation of pulldown experiments (in which particular targets can be isolated using, for instance, reporter molecules which bind to a certain column), and fluorescence spectrometry (in which the fluorophore is attached to a target of interest and the target quantified or located). More recently, novel methods have been used to incorporate click reaction partners onto and into biomolecules, including the incorporation of unnatural amino acids containing reactive groups into proteins and the modification of nucleotides. These techniques represent a part of the field of chemical biology, in which click chemistry plays a fundamental role by intentionally and specifically coupling modular units to various ends.
The biotech company Shasqi is leveraging click chemistry in humans.
Click chemistry is not limited to biological conditions: the concept of a "click" reaction has been used in chemoproteomic, pharmacological, biomimetic and molecular machinery applications.
Click chemistry is a powerful tool to probe for the cellular localization of small molecules. Knowing where a small molecule goes in the cell gives powerful insights into its mechanism of action. This approach has been used in numerous studies, and discoveries include that salinomycin localizes to lysosomes to initiate ferroptosis in cancer stem cells and that metformin derivatives accumulate in mitochondria to chelate copper(II), affecting metabolism and epigenetic changes downstream in inflammatory macrophages.
The commercial potential of click chemistry is great. The fluorophore rhodamine has been coupled onto norbornene, and reacted with tetrazine in living systems. In other cases, SPAAC between a cyclooctyne-modified fluorophore and azide-tagged proteins allowed the selection of these proteins in cell lysates.
Methods for the incorporation of click reaction partners into systems in and ex vivo contribute to the scope of possible reactions. The development of unnatural amino acid incorporation by ribosomes has allowed for the incorporation of click reaction partners as unnatural side groups on these unnatural amino acids. For example, a UAA with an azide side group provides convenient access for cycloalkynes to proteins tagged with this "AHA" unnatural amino acid. In another example, "CpK" has a side group including a cyclopropene alpha to an amide bond that serves as a reaction partner to tetrazine in an inverse Diels–Alder reaction.
The synthesis of luciferin exemplifies another strategy of isolating reaction partners, which is to take advantage of rarely-occurring, natural groups such as the 1,2-aminothiol, which appears only when a cysteine is the final N' amino acid in a protein. Their natural selectivity and relative bioorthogonality is thus valuable in developing probes specific for these tags. The above reaction occurs between a 1,2-aminothiol and a 2-cyanobenzothiazole to make luciferin, which is fluorescent. This luciferin fluorescence can be then quantified by spectrometry following a wash, and used to determine the relative presence of the molecule bearing the 1,2-aminothiol. If the quantification of non-1,2-aminothiol-bearing protein is desired, the protein of interest can be cleaved to yield a fragment with a N' Cys that is vulnerable to the 2-CBT.
Additional applications include:
two-dimensional gel electrophoresis separation
preparative organic synthesis of 1,4-substituted triazoles
modification of peptide function with triazoles
modification of natural products and pharmaceuticals
natural product discovery
drug discovery
macrocyclizations using Cu(I) catalyzed triazole couplings
modification of DNA and nucleotides by triazole ligation
supramolecular chemistry: calixarenes, rotaxanes, and catenanes
dendrimer design
carbohydrate clusters and carbohydrate conjugation by Cu(I) catalyzed triazole ligation reactions
polymers and biopolymers
surfaces
material science
nanotechnology,
bioconjugation, for example, azidocoumarin, and
biomaterials
In combination with combinatorial chemistry, high-throughput screening, and building chemical libraries, click chemistry has hastened new drug discoveries by making each reaction in a multistep synthesis fast, efficient, and predictable.
Technology license
The Scripps Research Institute has a portfolio of click-chemistry patents. Licensees include Invitrogen, Allozyne, Aileron, Integrated Diagnostics, and the biotech company , a BASF spin-off created to sell products made using click chemistry. Moreover, holds a worldwide exclusive license for the research and diagnostic market for the nucleic acid field.
Fluorescent azides and alkynes are also produced by companies such as Cyandye.
References
External links
Click Chemistry: Short Review and Recent Literature
National Science Foundation: Feature "Going Live with Click Chemistry"
Chemical and Engineering News: Feature "In-Situ Click Chemistry"
Chemical and Engineering News: Feature "Copper-free Click Chemistry"
Metal-free click chemistry review
Click Chemistry a Chem Soc Rev themed issue highlighting the latest applications of click chemistry, guest edited by M. G. Finn and Valery Fokin. Published by the Royal Society of Chemistry
Organic chemistry
Organic redox reaction
Organic reductions or organic oxidations or organic redox reactions are redox reactions that take place with organic compounds. In organic chemistry oxidations and reductions are different from ordinary redox reactions, because many reactions carry the name but do not actually involve electron transfer. Instead the relevant criterion for organic oxidation is gain of oxygen and/or loss of hydrogen. Simple functional groups can be arranged in order of increasing oxidation state; the oxidation numbers are only an approximation.
When methane is oxidized to carbon dioxide its oxidation number changes from −4 to +4. Classical reductions include alkene reduction to alkanes and classical oxidations include oxidation of alcohols to aldehydes. In oxidations electrons are removed and the electron density of a molecule is reduced. In reductions electron density increases when electrons are added to the molecule. This terminology is always centered on the organic compound. For example, it is usual to refer to the reduction of a ketone by lithium aluminium hydride, but not to the oxidation of lithium aluminium hydride by a ketone. Many oxidations involve removal of hydrogen atoms from the organic molecule, and reduction adds hydrogens to an organic molecule.
Many reactions classified as reductions also appear in other classes. For instance, conversion of a ketone to an alcohol by lithium aluminium hydride can be considered a reduction, but the hydride is also a good nucleophile in nucleophilic substitution. Many redox reactions in organic chemistry have coupling reaction mechanisms involving free radical intermediates. True organic redox chemistry can be found in electrochemical organic synthesis or electrosynthesis. An example of an organic reaction that can take place in an electrochemical cell is the Kolbe electrolysis.
In disproportionation reactions the reactant is both oxidised and reduced in the same chemical reaction forming two separate compounds.
Asymmetric catalytic reductions and asymmetric catalytic oxidations are important in asymmetric synthesis.
Organic oxidations
Most oxidations are conducted with air or oxygen, especially in industry. These oxidations include routes to chemical compounds, remediation of pollutants, and combustion.
Many reagents have been invented for organic oxidations. Organic oxidation reagents are usually classified according to the functional group attacked by the oxidant:
Oxidation of C-H bonds
Oxidation of C-C, C=C, and C≡C bonds
Oxidation of alcohols and various carbonyls
Often the substrate to be oxidized features more than one functional group. In such cases, selective oxidations become important.
Organic reductions
In organic chemistry, reduction is equivalent to the addition of hydrogen atoms, usually in pairs. The reaction of unsaturated organic compounds with hydrogen gas is called hydrogenation. The reaction of saturated organic compounds with hydrogen gas is called hydrogenolysis. Hydrogenolysis necessarily cleaves C-X bonds (X = C, O, N, etc.). Reductions can also be effected by adding hydride and proton sources, the so-called heterolytic pathway. Such reactions are often effected using stoichiometric hydride reagents such as sodium borohydride or lithium aluminium hydride.
See also
Oxidizing agent
Reducing agent
Transfer hydrogenation
Electrosynthesis
Functional group oxidations
Alcohol oxidation
Oxidation of oximes and primary amines to nitro compounds
Glycol cleavage
Oxidative cleavage of α-Hydroxy acids
Alkene oxidations
Oxidation of primary amines to nitriles
Oxidation of thiols to sulfonic acids
Oxidation of hydrazines to azo compounds
Functional group reductions
Carbonyl reduction
Amide reduction
Nitrile reduction
Reduction of nitro compounds
Reduction of imines and Schiff bases
Reduction of aromatic compounds to saturated rings
References
Redox
Granularity
Granularity (also called graininess) is the degree to which a material or system is composed of distinguishable pieces, "granules" or "grains" (metaphorically).
It can either refer to the extent to which a larger entity is subdivided, or the extent to which groups of smaller indistinguishable entities have joined together to become larger distinguishable entities.
Precision and ambiguity
Coarse-grained materials or systems have fewer, larger discrete components than fine-grained materials or systems.
A coarse-grained description of a system regards large subcomponents.
A fine-grained description regards smaller components of which the larger ones are composed.
The concepts granularity, coarseness, and fineness are relative; and are used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all cities in those states, etc.
Physics
A fine-grained description of a system is a detailed, exhaustive, low-level model of it. A coarse-grained description is a model where some of this fine detail has been smoothed over or averaged out. The replacement of a fine-grained description with a lower-resolution coarse-grained model is called coarse-graining. (See for example the second law of thermodynamics)
Molecular dynamics
In molecular dynamics, coarse graining consists of replacing an atomistic description of a biological molecule with a lower-resolution coarse-grained model that averages or smooths away fine details.
Coarse-grained models have been developed for investigating the longer time- and length-scale dynamics that are critical to many biological processes, such as lipid membranes and proteins. These concepts not only apply to biological molecules but also inorganic molecules.
Coarse graining may remove certain degrees of freedom, such as the vibrational modes between two atoms, or represent the two atoms as a single particle. The extent to which systems may be coarse-grained is simply bounded by the accuracy in the dynamics and structural properties one wishes to replicate. This modern area of research is in its infancy, and although it is commonly used in biological modeling, the analytic theory behind it is poorly understood.
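A toy example of the mapping step, assuming nothing more than made-up coordinates and masses: groups of atoms are replaced by single beads placed at each group's center of mass.

import numpy as np

# Invented atomic coordinates (nm) and masses (amu) for two small groups.
atom_xyz = np.array([[0.0, 0.0, 0.0],
                     [0.1, 0.0, 0.0],
                     [0.1, 0.1, 0.0],
                     [1.0, 1.0, 1.0],
                     [1.1, 1.0, 1.0]])
atom_mass = np.array([12.0, 1.0, 1.0, 16.0, 1.0])
groups = [[0, 1, 2], [3, 4]]           # which atoms are lumped into each bead

# Each coarse-grained bead sits at the mass-weighted average (center of mass).
beads = np.array([(atom_mass[g, None] * atom_xyz[g]).sum(axis=0) / atom_mass[g].sum()
                  for g in groups])
print(beads)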
Computing
In parallel computing, granularity means the amount of computation in relation to communication, i.e., the ratio of computation to the amount of communication.
Fine-grained parallelism means individual tasks are relatively small in terms of code size and execution time. The data is transferred among processors frequently in amounts of one or a few memory words. Coarse-grained is the opposite: data is communicated infrequently, after larger amounts of computation.
The finer the granularity, the greater the potential for parallelism and hence speed-up, but the greater the overheads of synchronization and communication. Granularity disintegrators exist as well and are important to understand in order to determine the accurate level of granularity.
In order to attain the best parallel performance, the best balance between load and communication overhead needs to be found. If the granularity is too fine, the performance can suffer from the increased communication overhead. On the other side, if the granularity is too coarse, the performance can suffer from load imbalance.
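The trade-off can be made concrete with a toy model (all numbers invented) that treats granularity as the ratio of computation time to communication time per task:

# Toy model: computation scales with the amount of data per task, while each
# task pays a fixed communication (synchronization) overhead.
per_item_compute = 1e-6        # seconds of useful work per data item (assumed)
per_task_comm = 1e-4           # seconds of communication overhead per task (assumed)

def granularity_ratio(items_per_task):
    return (items_per_task * per_item_compute) / per_task_comm

for items in (10, 1_000, 100_000):          # finer -> coarser tasks
    print(items, round(granularity_ratio(items), 2))
# Very fine tasks give ratios well below 1 (communication dominates); very
# coarse tasks raise the ratio but leave fewer tasks available to balance load.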
Reconfigurable computing and supercomputing
In reconfigurable computing and in supercomputing these terms refer to the data path width. The use of about one-bit wide processing elements like the configurable logic blocks (CLBs) in an FPGA is called fine-grained computing or fine-grained reconfigurability, whereas using wide data paths, such as, for instance, 32 bits wide resources, like microprocessor CPUs or data-stream-driven data path units (DPUs) like in a reconfigurable datapath array (rDPA) is called coarse-grained computing or coarse-grained reconfigurability.
Data and information
The granularity of data refers to the size in which data fields are sub-divided. For example, a postal address can be recorded, with coarse granularity, as a single field:
address = 200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA
or with fine granularity, as multiple fields:
street address = 200 2nd Ave. South #358
city = St. Petersburg
state = FL
postal code = 33701-4313
country = USA
or even finer granularity:
street = 2nd Ave. South
address number = 200
suite/apartment = #358
city = St. Petersburg
state = FL
postal-code = 33701
postal-code-add-on = 4313
country = USA
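The same contrast can be expressed in code; the sketch below (field names are illustrative, not a standard schema) stores the address once as a single opaque string and once as separately addressable fields:

from dataclasses import dataclass

# Coarse granularity: one opaque field.
coarse_address = "200 2nd Ave. South #358, St. Petersburg, FL 33701-4313 USA"

# Finer granularity: the same information as individually addressable fields.
@dataclass
class Address:
    street: str
    address_number: str
    suite: str
    city: str
    state: str
    postal_code: str
    postal_code_add_on: str
    country: str

fine_address = Address("2nd Ave. South", "200", "#358", "St. Petersburg",
                       "FL", "33701", "4313", "USA")
print(fine_address.state, fine_address.postal_code)   # fields usable in isolation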
Finer granularity has overheads for data input and storage. This manifests itself in a higher number of objects and methods in the object-oriented programming paradigm or more subroutine calls for procedural programming and parallel computing environments. It does, however, offer benefits in flexibility of data processing by treating each data field in isolation if required. A performance problem caused by excessive granularity may not reveal itself until scalability becomes an issue.
Within database design and data warehouse design, data grain can also refer to the smallest combination of columns in a table which makes the rows (also called records) unique.
See also
Complex systems
Complexity
Cybernetics
Granular computing
Granularity (parallel computing)
Dennett's three stances
High- and low-level
Levels of analysis
Meta-systems
Multiple granularity locking
Precision (computer science)
Self-organization
Specificity (linguistics)
Systems thinking
Notes
References
Statistical mechanics
Business terms
Geology
Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science.
Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole. One aspect is to demonstrate the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates.
Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.
Geological material
The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.
Minerals
Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement.
Each mineral has distinct physical properties, and there are many tests to determine each of them. Minerals are often identified through these tests. The specimens can be tested for:
Color: Minerals are grouped by their color. Mostly diagnostic but impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help identify the mineral.
Hardness: The resistance of a mineral to scratching or indentation.
Breakage pattern: A mineral can either show fracture or cleavage, the former being breakage of uneven surfaces, and the latter a breakage along closely spaced parallel planes.
Luster: Quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, dull.
Specific gravity: the ratio of a mineral's density to the density of water.
Effervescence: Involves dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste such as halite (which tastes like table salt).
Rock
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them.
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. Sedimentary rocks are mainly divided into four categories: sandstone, shale, carbonate, and evaporite. This group of classifications focuses partly on the size of sedimentary particles (sandstone and shale), and partly on mineralogy and formation processes (carbonation and evaporation). Igneous and sedimentary rocks can then be turned into metamorphic rocks by heat and pressure that change its mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify.
Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks.
To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.
Unlithified material
Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock. This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.
Magma
Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.
Whole-Earth structure
Plate tectonics
In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity.
There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic parts of plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.
The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries:
Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.
Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another.
Transform boundaries, such as the San Andreas Fault system, are where plates slide horizontally past each other.
Plate tectonics has provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.
Earth structure
Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.
Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a lithosphere (including crust) on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.
Geological time
The geological time scale encompasses the history of the Earth. It is bracketed at the earliest by the dates of the first Solar System material at 4.567 Ga (4.567 billion years ago) and the formation of the Earth at 4.54 Ga (4.54 billion years ago), which is the beginning of the Hadean eon, a division of geological time. At the later end of the scale, it is marked by the present day (in the Holocene epoch).
Timescale of the Earth
Important milestones on Earth
4.567 Ga (gigaannum: billion years ago): Solar system formation
4.54 Ga: Accretion, or formation, of Earth
c. 4 Ga: End of Late Heavy Bombardment, the first life
c. 3.5 Ga: Start of photosynthesis
c. 2.3 Ga: Oxygenated atmosphere, first snowball Earth
730–635 Ma (megaannum: million years ago): second snowball Earth
541 ± 0.3 Ma: Cambrian explosion – vast multiplication of hard-bodied life; first abundant fossils; start of the Paleozoic
c. 380 Ma: First vertebrate land animals
250 Ma: Permian-Triassic extinction – 90% of all land animals die; end of Paleozoic and beginning of Mesozoic
66 Ma: Cretaceous–Paleogene extinction – Dinosaurs die; end of Mesozoic and beginning of Cenozoic
c. 7 Ma: First hominins appear
3.9 Ma: First Australopithecus, direct ancestor to modern Homo sapiens, appears
200 ka (kiloannum: thousand years ago): First modern Homo sapiens appear in East Africa
Timescale of the Moon
Timescale of Mars
Dating methods
Relative dating
Methods for relative dating were developed when geology first emerged as a natural science. Geologists still use the following principles today as a means to provide information about geological history and the timing of geological events.
The principle of uniformitarianism states that the geological processes observed in operation that modify the Earth's crust at present have worked in much the same way over geological time. A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now."
The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. Different types of intrusions include stocks, laccoliths, batholiths, sills and dikes.
The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault.
The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock that contains them.
The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal).
The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of the vertical timeline, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed.
The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist during the same period throughout the world, their presence or (sometimes) absence provides a relative age of the formations where they appear. Based on principles that William Smith laid out almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils formed globally at the same time.
Absolute dating
Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods.
At the beginning of the 20th century, advancement in geological science was facilitated by the ability to obtain accurate absolute dates to geological events using radioactive isotopes and other methods. This changed the understanding of geological time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages.
For many geological applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice. These are used in geochronologic and thermochronologic studies. Common methods include uranium–lead dating, potassium–argon dating, argon–argon dating and uranium–thorium dating. These methods are used for a variety of applications. Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes and calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement.
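To make the underlying arithmetic concrete, a mineral's apparent age can be computed from the measured daughter-to-parent isotope ratio with the standard decay relation t = ln(1 + D/P) / λ, where λ is the decay constant derived from the half-life. The short Python sketch below applies this relation under simplifying assumptions (no initial daughter isotope and a closed system since closure); the ratio and half-life values are illustrative placeholders rather than data from any particular study.

    import math

    def decay_constant(half_life_years):
        """Convert a half-life into the decay constant lambda."""
        return math.log(2) / half_life_years

    def radiometric_age(daughter_parent_ratio, half_life_years):
        """Age in years from the measured daughter/parent ratio, assuming
        no initial daughter and a closed system since closure."""
        lam = decay_constant(half_life_years)
        return math.log(1.0 + daughter_parent_ratio) / lam

    # Illustrative values only: Rb-87 -> Sr-87 has a half-life of roughly 48.8 billion years.
    ratio = 0.0011          # hypothetical measured radiogenic daughter/parent ratio
    half_life = 48.8e9      # years (approximate)
    print(f"Apparent age: {radiometric_age(ratio, half_life) / 1e6:.0f} Ma")

Real geochronological work also corrects for initial daughter content, analytical uncertainty, and open-system behavior, so this sketch only illustrates the core idea behind the dating equation.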
Thermochemical techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleo-topography.
Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle.
Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and to measure erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon.
Geological development of an area
The geology of an area changes through time as rock units are deposited or intruded, and deformational processes alter their shapes and locations.
Rock units are first emplaced either by deposition onto the surface or intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude.
After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.
When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes the deeper rock to move on top of the shallower rock. Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold rather than fault. These folds can either be those where the material in the center of the fold buckles upwards, creating "antiforms", or where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms: antiforms and synforms.
Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks and creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks.
Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through ductile stretching and thinning. Normal faults drop rock units that are higher below those that are lower. This typically results in younger units ending up below older units. Stretching of units can result in their thinning. In fact, at one location within the Maria Fold and Thrust Belt, the entire sedimentary sequence of the Grand Canyon appears over a length of less than a meter. Rocks at depths where they are ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage", because of their visual similarity.
Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely.
The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian shield, or rings of dikes around the lava tube of a volcano.
These processes do not all occur in a single environment and do not necessarily occur in a single order. The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited. Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.
Investigative methods
Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and to understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface. Sub-specialities of geology may distinguish endogenous and exogenous geology.
Field methods
Geological field work varies depending on the task at hand. Typical fieldwork could consist of:
Geological mapping
Structural mapping: identifying the locations of major rock units and the faults and folds that led to their placement there.
Stratigraphic mapping: pinpointing the locations of sedimentary facies (lithofacies and biofacies) or the mapping of isopachs of equal thickness of sedimentary rock
Surficial mapping: recording the locations of soils and surficial deposits
Surveying of topographic features
Compilation of topographic maps
Work to understand change across landscapes, including:
Patterns of erosion and deposition
River-channel change through migration and avulsion
Hillslope processes
Subsurface mapping through geophysical methods
These methods include:
Shallow seismic surveys
Ground-penetrating radar
Aeromagnetic surveys
Electrical resistivity tomography
They aid in:
Hydrocarbon exploration
Finding groundwater
Locating buried archaeological artifacts
High-resolution stratigraphy
Measuring and describing stratigraphic sections on the surface
Well drilling and logging
Biogeochemistry and geomicrobiology
Collecting samples to:
determine biochemical pathways
identify new species of organisms
identify new chemical compounds
and to use these discoveries to:
understand early life on Earth and how it functioned and metabolized
find important compounds for use in pharmaceuticals
Paleontology: excavation of fossil material
For research into past life and evolution
For museums and education
Collection of samples for geochronology and thermochronology
Glaciology: measurement of characteristics of glaciers and their motion
Petrology
In addition to identifying rocks in the field (lithology), petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are optical microscopy and electron microprobe analysis. In an optical mineralogy analysis, petrologists analyze thin sections of rock samples using a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens. In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals. Stable and radioactive isotope studies provide insight into the geochemical evolution of rock units.
Petrologists can also use fluid inclusion data and perform high temperature and pressure physical experiments to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks. This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.
Structural geology
Structural geologists use microscopic analysis of oriented thin sections of geological samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings.
The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets. A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geological structures.
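As a concrete sketch of how a single measurement becomes a point on a stereonet, the Python snippet below converts the trend and plunge of a linear feature into Cartesian plotting coordinates using the equal-area (Schmidt) form of the projection; the example measurement is hypothetical, and an equal-angle (Wulff) net would use a slightly different radial formula.

    import math

    def line_to_stereonet_xy(trend_deg, plunge_deg, net_radius=1.0):
        """Project a line (trend/plunge) onto a lower-hemisphere equal-area
        (Schmidt) net. Returns Cartesian (x, y) with north along +y and
        east along +x."""
        trend = math.radians(trend_deg)
        plunge = math.radians(plunge_deg)
        # Angular distance of the line from the vertical (the net centre):
        c = math.pi / 2 - plunge
        # Lambert azimuthal equal-area radial distance, scaled so that a
        # horizontal line (plunge 0) plots on the primitive circle.
        r = net_radius * math.sqrt(2) * math.sin(c / 2)
        return r * math.sin(trend), r * math.cos(trend)

    # Hypothetical fold-axis measurement: trend 120 degrees, plunge 35 degrees.
    x, y = line_to_stereonet_xy(120, 35)
    print(f"Plot point at x={x:.3f}, y={y:.3f}")

A vertical line plots at the centre of the net and a horizontal line on its edge, which is why clusters of such points can reveal fold axes and other structural orientations.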
Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries. In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge. Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt. This helps to show the relationship between erosion and the shape of a mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time.
Stratigraphy
In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores. Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface. Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions. Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth, interpret past environments, and locate areas for water, coal, and hydrocarbon extraction.
In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them. These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition.
Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate.
Planetary geology
With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geological principles to study other bodies of the solar system. This is a major aspect of planetary science, and largely focuses on the terrestrial planets, icy moons, asteroids, comets, and meteorites. However, some planetary geophysicists study the giant planets and exoplanets.
Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialized terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use.
Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.
Applied geology
Economic geology
Economic geology is a branch of geology that deals with aspects of economic minerals that humankind uses to fulfill various needs. Economic minerals are those extracted profitably for various practical uses. Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.
Mining geology
Mining geology consists of the extraction of mineral and ore resources from the Earth. Some resources of economic interest include gemstones, metals such as gold and copper, and many minerals such as asbestos, magnesite, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.
Petroleum geology
Petroleum geologists study the locations of the subsurface of the Earth that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins, they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.
Engineering geology
Engineering geology is the application of geological principles to engineering practice for the purpose of assuring that the geological factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed. Engineering geology is distinct from geological engineering, particularly in North America.
In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical principles of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.
Hydrology
Geology and geological principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geological environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater, which can often provide a ready supply of uncontaminated water and is especially important in arid regions, and to monitor the spread of contaminants in groundwater wells.
Paleoclimatology
Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores and sediment cores are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.
Natural hazards
Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life. Examples of important natural hazards that are pertinent to geology (as opposed to those that are mainly or only pertinent to meteorology) are:
History
The study of the physical material of the Earth dates back at least to ancient Greece, when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones). During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals then in practical use – even correctly noting the origin of amber. Additionally, in the 4th century BCE Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and formulated a theory that the Earth changes at a slow rate and that these changes cannot be observed during one person's lifetime. Aristotle thereby developed one of the first evidence-based concepts connected to the geological realm regarding the rate at which the Earth physically changes.
Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Persian geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Drawing from Greek and Indian scientific literature that were not destroyed by the Muslim conquests, the Persian scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science. In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by the erosion of the mountains and by deposition of silt.
Georgius Agricola (1494–1555) published his groundbreaking work De Natura Fossilium in 1546 and is seen as the founder of geology as a scientific discipline.
Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy.
The word geology was first used by Ulisse Aldrovandi in 1603, then by Jean-André Deluc in 1778 and introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth" and λόγος, logos, meaning "speech". But according to another source, the word "geology" comes from a Norwegian, Mikkel Pedersøn Escholt (1600–1669), who was a priest and scholar. Escholt first used the definition in his book titled, Geologia Norvegica (1657).
William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them.
In 1763, Mikhail Lomonosov published his treatise On the Strata of Earth. His work was the first narrative of modern geology, based on the unity of processes in time and explanation of the Earth's past from the present.
James Hutton (1726–1797) is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795.
Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time.
The first geological map of the U.S. was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks.
Sir Charles Lyell (1797–1875) first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time.
Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years. By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet.
Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old.
Fields or related disciplines
Earth system science
Economic geology
Mining geology
Petroleum geology
Engineering geology
Environmental geology
Environmental science
Geoarchaeology
Geochemistry
Biogeochemistry
Isotope geochemistry
Geochronology
Geodetics
Geography
Physical geography
Technical geography
Geological engineering
Geological modelling
Geometallurgy
Geomicrobiology
Geomorphology
Geomythology
Geophysics
Glaciology
Historical geology
Hydrogeology
Meteorology
Mineralogy
Oceanography
Marine geology
Paleoclimatology
Paleontology
Micropaleontology
Palynology
Petrology
Petrophysics
Planetary geology
Plate tectonics
Regional geology
Sedimentology
Seismology
Soil science
Pedology (soil study)
Speleology
Stratigraphy
Biostratigraphy
Chronostratigraphy
Lithostratigraphy
Structural geology
Systems geology
Tectonics
Volcanology
See also
List of individual rocks
References
External links
One Geology: This interactive geological map of the world is an international initiative of the geological surveys around the globe. This groundbreaking project was launched in 2007 and contributed to the 'International Year of Planet Earth', becoming one of their flagship projects.
Earth Science News, Maps, Dictionary, Articles, Jobs
American Geophysical Union
American Geosciences Institute
European Geosciences Union
European Federation of Geologists
Geological Society of America
Geological Society of London
Video-interviews with famous geologists
Geology OpenTextbook
Chronostratigraphy benchmarks
The principles and objects of geology, with special reference to the geology of Egypt (1911), W. F. Hume | 0.789345 | 0.998425 | 0.788102 |
Homogeneity and heterogeneity | Homogeneity and heterogeneity are concepts relating to the uniformity of a substance, process or image. A homogeneous feature is uniform in composition or character (i.e. color, shape, size, weight, height, distribution, texture, language, income, disease, temperature, radioactivity, architectural design, etc.); one that is heterogeneous is distinctly nonuniform in at least one of these qualities.
Etymology and spelling
The words homogeneous and heterogeneous come from Medieval Latin homogeneus and heterogeneus, from Ancient Greek ὁμογενής (homogenēs) and ἑτερογενής (heterogenēs), from ὁμός (homos, "same") and ἕτερος (heteros, "other, another, different") respectively, followed by γένος (genos, "kind"); -ous is an adjectival suffix.
Alternate spellings omitting the last -e- (and the associated pronunciations) are common, but mistaken: homogenous is strictly a biological/pathological term which has largely been replaced by homologous. But use of homogenous to mean homogeneous has seen a rise since 2000, enough for it to now be considered an "established variant". Similarly, heterogenous is a spelling traditionally reserved to biology and pathology, referring to the property of an object in the body having its origin outside the body.
Scaling
The concepts apply at every level of complexity. From atoms to galaxies, and from plants and animals to humans, all systems share both common and unique sets of complexities.
Hence, a substance may be homogeneous on a larger scale yet heterogeneous on a smaller scale. Treating such a substance as uniform on the larger scale is known as an effective medium approximation.
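As a rough illustration of the idea, one widely used effective-medium mixing rule, the Maxwell Garnett approximation, estimates a bulk property (here, permittivity) of a two-phase material from the properties of its constituents and the volume fraction of inclusions. The Python sketch below implements that rule under the assumption of dilute spherical inclusions; the numerical values are illustrative only.

    def maxwell_garnett(eps_matrix, eps_inclusion, volume_fraction):
        """Effective permittivity of dilute spherical inclusions in a host
        matrix, using the Maxwell Garnett mixing rule."""
        em, ei, f = eps_matrix, eps_inclusion, volume_fraction
        numerator = ei + 2 * em + 2 * f * (ei - em)
        denominator = ei + 2 * em - f * (ei - em)
        return em * numerator / denominator

    # Illustrative values: 10% spherical inclusions (eps = 12) in a host with eps = 2.
    print(maxwell_garnett(2.0, 12.0, 0.10))  # a single "homogeneous" value for the mixture

The point of the calculation is that, at scales much larger than the inclusions, the heterogeneous composite behaves as if it had this single effective value.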
Examples
Various disciplines understand heterogeneity, or being heterogeneous, in different ways.
Biology
Environmental heterogeneity
Environmental heterogeneity (EH) is a hypernym for the different environmental factors that contribute to the diversity of species, like climate, topography, and land cover. Biodiversity is correlated with geodiversity on a global scale. Heterogeneity in geodiversity features and environmental variables is an indicator of environmental heterogeneity, which drives biodiversity at local and regional scales.
The scientific literature in ecology contains a large number of different terms for environmental heterogeneity, which are often undefined or conflicting in their meaning; several of these terms are used as synonyms of environmental heterogeneity.
Chemistry
Homogeneous and heterogeneous mixtures
In chemistry, a heterogeneous mixture consists of either or both of 1) multiple states of matter or 2) hydrophilic and hydrophobic substances in one mixture; an example of the latter would be a mixture of water, octane, and silicone grease. Heterogeneous solids, liquids, and gases may be made homogeneous by melting, by stirring, or by allowing time to pass for diffusion to distribute the molecules evenly. For example, adding dye to water creates a heterogeneous mixture at first, which becomes homogeneous over time. Entropy allows heterogeneous substances to become homogeneous over time.
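The gradual homogenization of an initially heterogeneous mixture can be illustrated with a minimal one-dimensional diffusion model. The Python sketch below is a toy calculation, not a physically calibrated simulation: dye starts concentrated in one region, a discrete diffusion step is applied repeatedly, and the concentration profile flattens toward the uniform average.

    import numpy as np

    # Toy 1-D "glass of water": dye initially concentrated in the left fifth.
    cells = 100
    concentration = np.zeros(cells)
    concentration[:20] = 1.0

    d = 0.2  # dimensionless diffusion coefficient (illustrative; must be <= 0.5 for stability)
    for step in range(50_000):
        # Explicit finite-difference diffusion step with no-flux boundaries.
        padded = np.pad(concentration, 1, mode="edge")
        concentration += d * (padded[:-2] - 2 * concentration + padded[2:])

    # After many steps the profile is nearly uniform: the mixture is homogeneous.
    print(concentration.min(), concentration.max())  # both approach the average value 0.2

Because the scheme conserves the total amount of dye, the uniform end state is simply the initial amount spread evenly over the whole domain.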
A heterogeneous mixture is a mixture of two or more compounds that are not uniformly distributed. Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix, and concrete (not cement). A mixture can be judged homogeneous when it is settled and uniform throughout, with the liquid, gas, or solid showing a single color and the same form. Various models have been proposed to describe the concentrations in different phases; the phenomena to be considered are mass transfer rates and reactions.
Homogeneous and heterogeneous reactions
Homogeneous reactions are chemical reactions in which the reactants and products are in the same phase, while heterogeneous reactions have reactants in two or more phases. Reactions that take place on the surface of a catalyst of a different phase are also heterogeneous. A reaction between two gases or two miscible liquids is homogeneous. A reaction between a gas and a liquid, a gas and a solid or a liquid and a solid is heterogeneous.
Geology
Earth is a heterogeneous substance in many respects; for instance, rocks are inherently heterogeneous, with heterogeneity usually occurring at the micro-scale and mini-scale.
Linguistics
In formal semantics, homogeneity is the phenomenon in which plural expressions imply "all" when asserted but "none" when negated. For example, the English sentence "Robin read the books" means that Robin read all the books, while "Robin didn't read the books" means that she read none of them. Neither sentence can be asserted if Robin read exactly half of the books. This is a puzzle because the negative sentence does not appear to be the classical negation of the sentence. A variety of explanations have been proposed including that natural language operates on a trivalent logic.
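One minimal way to make the trivalent analysis concrete is to evaluate a plural predication as "true" when the predicate holds of all the relevant individuals, "false" when it holds of none, and "undefined" otherwise. The Python sketch below is only a toy illustration of this idea, not an implementation of any particular semantic theory; the book titles and Robin's reading list are made up.

    def plural_predication(predicate, individuals):
        """Trivalent evaluation of 'the individuals <predicate>':
        'true' if all satisfy it, 'false' if none do, else 'undefined'."""
        values = [predicate(x) for x in individuals]
        if all(values):
            return "true"
        if not any(values):
            return "false"
        return "undefined"

    books = ["book1", "book2", "book3", "book4"]
    read_by_robin = {"book1", "book2"}   # Robin read exactly half of the books

    verdict = plural_predication(lambda b: b in read_by_robin, books)
    print(verdict)  # "undefined": neither the sentence nor its negation is assertable

The "undefined" outcome for the half-read case mirrors the observation that neither "Robin read the books" nor "Robin didn't read the books" can be asserted in that situation.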
Information technology
In information technology, heterogeneous computing occurs in a network comprising different types of computers, potentially with vastly differing memory sizes, processing power, and even basic underlying architecture.
Mathematics and statistics
In algebra, a homogeneous polynomial is one whose nonzero terms all have the same total degree; for example, x² + 2xy + y² is homogeneous of degree 2.
In the study of binary relations, a homogeneous relation R is on a single set (R ⊆ X × X) while a heterogeneous relation concerns possibly distinct sets (R ⊆ X × Y, X = Y or X ≠ Y).
In statistical meta-analysis, study heterogeneity is when multiple studies on an effect are measuring somewhat different effects due to differences in subject population, intervention, choice of analysis, experimental design, etc.; this can cause problems in attempts to summarize the meaning of the studies.
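Study heterogeneity is commonly quantified with Cochran's Q statistic and the derived I² index. The Python sketch below shows the standard fixed-effect calculation; the effect sizes and variances are made-up numbers used purely for illustration.

    def heterogeneity(effects, variances):
        """Cochran's Q and the I^2 index for a set of study effect sizes
        with their within-study variances (fixed-effect weighting)."""
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
        df = len(effects) - 1
        i_squared = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
        return q, i_squared

    # Hypothetical standardized mean differences and their variances from four studies.
    effects = [0.30, 0.55, 0.10, 0.45]
    variances = [0.02, 0.03, 0.025, 0.04]
    q, i2 = heterogeneity(effects, variances)
    print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")

A large I² suggests that the studies are measuring somewhat different underlying effects, which is exactly the situation that complicates attempts to summarize them with a single pooled estimate.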
Medicine
In medicine and genetics, a genetic or allelic heterogeneous condition is one where the same disease or condition can be caused, or contributed to, by several factors, or in genetic terms, by varying or different genes or alleles.
In cancer research, cancer cell heterogeneity is thought to be one of the underlying reasons that make treatment of cancer difficult.
Physics
In physics, "heterogeneous" is understood to mean "having physical properties that vary within the medium".
Sociology
In sociology, "heterogeneous" may refer to a society or group that includes individuals of differing ethnicities, cultural backgrounds, sexes, or ages. "Diverse" is the more common synonym in this context.
See also
Complete spatial randomness
Heterologous
Epidemiology
Spatial analysis
Statistical hypothesis testing
Homogeneity blockmodeling
References
External links
Methodology | In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting the data. The study of methods concerns a detailed description and analysis of these processes. It includes evaluative aspects by comparing different methods. This way, it is assessed what advantages and disadvantages they have and for what research goals they may be used. These descriptions and evaluations depend on philosophical background assumptions. Examples are how to conceptualize the studied phenomena and what constitutes evidence for or against them. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.
Methodologies are traditionally divided into quantitative and qualitative research. Quantitative research is the main methodology of the natural sciences. It uses precise numerical measurements. Its goal is usually to find universal laws used to make predictions about future events. The dominant methodology in the natural sciences is called the scientific method. It includes steps like observation and the formulation of a hypothesis. Further steps are to test the hypothesis using an experiment, to compare the measurements to the expected results, and to publish the findings.
Qualitative research is more characteristic of the social sciences and gives less prominence to exact numerical measurements. It aims more at an in-depth understanding of the meaning of the studied phenomena and less at universal and predictive laws. Common methods found in the social sciences are surveys, interviews, focus groups, and the nominal group technique. They differ from each other concerning their sample size, the types of questions asked, and the general setting. In recent decades, many social scientists have started using mixed-methods research, which combines quantitative and qualitative methodologies.
Many discussions in methodology concern the question of whether the quantitative approach is superior, especially whether it is adequate when applied to the social domain. A few theorists reject methodology as a discipline in general. For example, some argue that it is useless since methods should be used rather than studied. Others hold that it is harmful because it restricts the freedom and creativity of researchers. Methodologists often respond to these objections by claiming that a good methodology helps researchers arrive at reliable theories in an efficient way. The choice of method often matters since the same factual material can lead to different conclusions depending on one's method. Interest in methodology has risen in the 20th century due to the increased importance of interdisciplinary work and the obstacles hindering efficient cooperation.
Definitions
The term "methodology" is associated with a variety of meanings. In its most common usage, it refers either to a method, to the field of inquiry studying methods, or to philosophical discussions of background assumptions involved in these processes. Some researchers distinguish methods from methodologies by holding that methods are modes of data collection while methodologies are more general research strategies that determine how to conduct a research project. In this sense, methodologies include various theoretical commitments about the intended outcomes of the investigation.
As method
The term "methodology" is sometimes used as a synonym for the term "method". A method is a way of reaching some predefined goal. It is a planned and structured procedure for solving a theoretical or practical problem. In this regard, methods stand in contrast to free and unstructured approaches to problem-solving. For example, descriptive statistics is a method of data analysis, radiocarbon dating is a method of determining the age of organic objects, sautéing is a method of cooking, and project-based learning is an educational method. The term "technique" is often used as a synonym both in the academic and the everyday discourse. Methods usually involve a clearly defined series of decisions and actions to be used under certain circumstances, usually expressable as a sequence of repeatable instructions. The goal of following the steps of a method is to bring about the result promised by it. In the context of inquiry, methods may be defined as systems of rules and procedures to discover regularities of nature, society, and thought. In this sense, methodology can refer to procedures used to arrive at new knowledge or to techniques of verifying and falsifying pre-existing knowledge claims. This encompasses various issues pertaining both to the collection of data and their analysis. Concerning the collection, it involves the problem of sampling and of how to go about the data collection itself, like surveys, interviews, or observation. There are also numerous methods of how the collected data can be analyzed using statistics or other ways of interpreting it to extract interesting conclusions.
As study of methods
However, many theorists emphasize the differences between the terms "method" and "methodology". In this regard, methodology may be defined as "the study or description of methods" or as "the analysis of the principles of methods, rules, and postulates employed by a discipline". This study or analysis involves uncovering assumptions and practices associated with the different methods and a detailed description of research designs and hypothesis testing. It also includes evaluative aspects: forms of data collection, measurement strategies, and ways to analyze data are compared and their advantages and disadvantages relative to different research goals and situations are assessed. In this regard, methodology provides the skills, knowledge, and practical guidance needed to conduct scientific research in an efficient manner. It acts as a guideline for various decisions researchers need to take in the scientific process.
Methodology can be understood as the middle ground between concrete particular methods and the abstract and general issues discussed by the philosophy of science. In this regard, methodology comes after formulating a research question and helps the researchers decide what methods to use in the process. For example, methodology should assist the researcher in deciding why one method of sampling is preferable to another in a particular case or which form of data analysis is likely to bring the best results. Methodology achieves this by explaining, evaluating and justifying methods. Just as there are different methods, there are also different methodologies. Different methodologies provide different approaches to how methods are evaluated and explained and may thus make different suggestions on what method to use in a particular case.
According to Aleksandr Georgievich Spirkin, "[a] methodology is a system of principles and general ways of organising and structuring theoretical and practical activity, and also the theory of this system". Helen Kara defines methodology as "a contextual framework for research, a coherent and logical scheme based on views, beliefs, and values, that guides the choices researchers make". Ginny E. Garcia and Dudley L. Poston understand methodology either as a complex body of rules and postulates guiding research or as the analysis of such rules and procedures. As a body of rules and postulates, a methodology defines the subject of analysis as well as the conceptual tools used by the analysis and the limits of the analysis. Research projects are usually governed by a structured procedure known as the research process. The goal of this process is given by a research question, which determines what kind of information one intends to acquire.
As discussion of background assumptions
Some theorists prefer an even wider understanding of methodology that involves not just the description, comparison, and evaluation of methods but includes additionally more general philosophical issues. One reason for this wider approach is that discussions of when to use which method often take various background assumptions for granted, for example, concerning the goal and nature of research. These assumptions can at times play an important role concerning which method to choose and how to follow it. For example, Thomas Kuhn argues in his The Structure of Scientific Revolutions that sciences operate within a framework or a paradigm that determines which questions are asked and what counts as good science. This concerns philosophical disagreements both about how to conceptualize the phenomena studied, what constitutes evidence for and against them, and what the general goal of researching them is. So in this wider sense, methodology overlaps with philosophy by making these assumptions explicit and presenting arguments for and against them. According to C. S. Herrman, a good methodology clarifies the structure of the data to be analyzed and helps the researchers see the phenomena in a new light. In this regard, a methodology is similar to a paradigm. A similar view is defended by Spirkin, who holds that a central aspect of every methodology is the world view that comes with it.
The discussion of background assumptions can include metaphysical and ontological issues in cases where they have important implications for the proper research methodology. For example, a realist perspective considering the observed phenomena as an external and independent reality is often associated with an emphasis on empirical data collection and a more distanced and objective attitude. Idealists, on the other hand, hold that external reality is not fully independent of the mind and tend, therefore, to include more subjective tendencies in the research process as well.
For the quantitative approach, philosophical debates in methodology include the distinction between the inductive and the hypothetico-deductive interpretation of the scientific method. For qualitative research, many basic assumptions are tied to philosophical positions such as hermeneutics, pragmatism, Marxism, critical theory, and postmodernism. According to Kuhn, an important factor in such debates is that the different paradigms are incommensurable. This means that there is no overarching framework to assess the conflicting theoretical and methodological assumptions. This critique puts into question various presumptions of the quantitative approach associated with scientific progress based on the steady accumulation of data.
Other discussions of abstract theoretical issues in the philosophy of science are also sometimes included. This can involve questions like how and whether scientific research differs from fictional writing as well as whether research studies objective facts rather than constructing the phenomena it claims to study. In the latter sense, some methodologists have even claimed that the goal of science is less to represent a pre-existing reality and more to bring about some kind of social change in favor of repressed groups in society.
Related terms and issues
Viknesh Andiappan and Yoke Kin Wan use the field of process systems engineering to distinguish the term "methodology" from the closely related terms "approach", "method", "procedure", and "technique". On their view, "approach" is the most general term. It can be defined as "a way or direction used to address a problem based on a set of assumptions". An example is the difference between hierarchical approaches, which consider one task at a time in a hierarchical manner, and concurrent approaches, which consider them all simultaneously. Methodologies are a little more specific. They are general strategies needed to realize an approach and may be understood as guidelines for how to make choices. Often the term "framework" is used as a synonym. A method is a still more specific way of practically implementing the approach. Methodologies provide the guidelines that help researchers decide which method to follow. The method itself may be understood as a sequence of techniques. A technique is a step taken that can be observed and measured. Each technique has some immediate result. The whole sequence of steps is termed a "procedure". A similar but less complex characterization is sometimes found in the field of language teaching, where the teaching process may be described through a three-level conceptualization based on "approach", "method", and "technique".
One question concerning the definition of methodology is whether it should be understood as a descriptive or a normative discipline. The key difference in this regard is whether methodology just provides a value-neutral description of methods or what scientists actually do. Many methodologists practice their craft in a normative sense, meaning that they express clear opinions about the advantages and disadvantages of different methods. In this regard, methodology is not just about what researchers actually do but about what they ought to do or how to perform good research.
Types
Theorists often distinguish various general types or approaches to methodology. The most influential classification contrasts quantitative and qualitative methodology.
Quantitative and qualitative
Quantitative research is closely associated with the natural sciences. It is based on precise numerical measurements, which are then used to arrive at exact general laws. This precision is also reflected in the goal of making predictions that can later be verified by other researchers. Examples of quantitative research include physicists at the Large Hadron Collider measuring the mass of newly created particles and positive psychologists conducting an online survey to determine the correlation between income and self-assessed well-being.
Qualitative research is characterized in various ways in the academic literature but there are very few precise definitions of the term. It is often used in contrast to quantitative research for forms of study that do not quantify their subject matter numerically. However, the distinction between these two types is not always obvious and various theorists have argued that it should be understood as a continuum and not as a dichotomy. A lot of qualitative research is concerned with some form of human experience or behavior, in which case it tends to focus on a few individuals and their in-depth understanding of the meaning of the studied phenomena. Examples of the qualitative method are a market researcher conducting a focus group in order to learn how people react to a new product or a medical researcher performing an unstructured in-depth interview with a participant from a new experimental therapy to assess its potential benefits and drawbacks. It is also used to improve quantitative research, such as informing data collection materials and questionnaire design. Qualitative research is frequently employed in fields where the pre-existing knowledge is inadequate. This way, it is possible to get a first impression of the field and potential theories, thus paving the way for investigating the issue in further studies.
Quantitative methods dominate in the natural sciences but both methodologies are used in the social sciences. Some social scientists focus mostly on one method while others try to investigate the same phenomenon using a variety of different methods. It is central to both approaches how the group of individuals used for the data collection is selected. This process is known as sampling. It involves the selection of a subset of individuals or phenomena to be measured. Important in this regard is that the selected samples are representative of the whole population, i.e. that no significant biases were involved when choosing. If this is not the case, the data collected does not reflect what the population as a whole is like. This affects generalizations and predictions drawn from the biased data. The number of individuals selected is called the sample size. For qualitative research, the sample size is usually rather small, while quantitative research tends to focus on big groups and collecting a lot of data. After the collection, the data needs to be analyzed and interpreted to arrive at interesting conclusions that pertain directly to the research question. This way, the wealth of information obtained is summarized and thus made more accessible to others. Especially in the case of quantitative research, this often involves the application of some form of statistics to make sense of the numerous individual measurements.
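As a simple illustration of sampling and statistical summarization in the quantitative approach, the Python sketch below draws a simple random sample from a synthetic "population" and reports the sample mean with a rough standard error; because the population is simulated, the numbers carry no empirical meaning.

    import random
    import statistics

    random.seed(42)

    # Synthetic "population" of 10,000 measurements (e.g., survey scores).
    population = [random.gauss(mu=50, sigma=10) for _ in range(10_000)]

    # Simple random sampling without replacement.
    sample_size = 200
    sample = random.sample(population, sample_size)

    mean = statistics.mean(sample)
    std_error = statistics.stdev(sample) / sample_size ** 0.5

    print(f"Sample mean: {mean:.2f} +/- {std_error:.2f} (standard error)")
    # A biased selection procedure (e.g., sampling only the largest values)
    # would make this estimate unrepresentative of the population.

The sketch also makes the earlier point about sample size tangible: enlarging the sample shrinks the standard error, while a biased selection procedure undermines generalization no matter how large the sample is.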
Many discussions in the history of methodology center around the quantitative methods used by the natural sciences. A central question in this regard is to what extent they can be applied to other fields, like the social sciences and history. The success of the natural sciences was often seen as an indication of the superiority of the quantitative methodology and used as an argument to apply this approach to other fields as well. However, this outlook has been put into question in the more recent methodological discourse. In this regard, it is often argued that the paradigm of the natural sciences is a one-sided development of reason, which is not equally well suited to all areas of inquiry. The divide between quantitative and qualitative methods in the social sciences is one consequence of this criticism.
Which method is more appropriate often depends on the goal of the research. For example, quantitative methods usually excel at evaluating preconceived hypotheses that can be clearly formulated and measured. Qualitative methods, on the other hand, can be used to study complex individual issues, often with the goal of formulating new hypotheses. This is especially relevant when the existing knowledge of the subject is inadequate. Important advantages of quantitative methods include precision and reliability. However, they often have difficulties studying very complex phenomena that are commonly of interest to the social sciences. Additional problems can arise when the data is misinterpreted to defend conclusions that are not directly supported by the measurements themselves. In recent decades, many researchers in the social sciences have started combining both methodologies. This is known as mixed-methods research. A central motivation for this is that the two approaches can complement each other in various ways: some issues are ignored or too difficult to study with one methodology and are better approached with the other. In other cases, both approaches are applied to the same issue to produce more comprehensive and well-rounded results.
Qualitative and quantitative research are often associated with different research paradigms and background assumptions. Qualitative researchers often use an interpretive or critical approach while quantitative researchers tend to prefer a positivistic approach. Important disagreements between these approaches concern the role of objectivity and hard empirical data as well as the research goal of predictive success rather than in-depth understanding or social change.
Others
Various other classifications have been proposed. One distinguishes between substantive and formal methodologies. Substantive methodologies tend to focus on one specific area of inquiry. The findings are initially restricted to this specific field but may be transferrable to other areas of inquiry. Formal methodologies, on the other hand, are based on a variety of studies and try to arrive at more general principles applying to different fields. They may also give particular prominence to the analysis of the language of science and the formal structure of scientific explanation. A closely related classification distinguishes between philosophical, general scientific, and special scientific methods.
One type of methodological outlook is called "proceduralism". According to it, the goal of methodology is to boil down the research process to a simple set of rules or a recipe that automatically leads to good research if followed precisely. However, it has been argued that, while this ideal may be acceptable for some forms of quantitative research, it fails for qualitative research. One argument for this position is based on the claim that research is not a technique but a craft that cannot be achieved by blindly following a method. In this regard, research depends on forms of creativity and improvisation to amount to good science.
Other types include inductive, deductive, and transcendental methods. Inductive methods are common in the empirical sciences and proceed through inductive reasoning from many particular observations to arrive at general conclusions, often in the form of universal laws. Deductive methods, also referred to as axiomatic methods, are often found in formal sciences, such as geometry. They start from a set of self-evident axioms or first principles and use deduction to infer interesting conclusions from these axioms. Transcendental methods are common in Kantian and post-Kantian philosophy. They start with certain particular observations. It is then argued that the observed phenomena can only exist if their conditions of possibility are fulfilled. This way, the researcher may draw general psychological or metaphysical conclusions based on the claim that the phenomenon would not be observable otherwise.
Importance
It has been argued that a proper understanding of methodology is important for various issues in the field of research. They include both the problem of conducting efficient and reliable research and that of validating knowledge claims made by others. Method is often seen as one of the main factors of scientific progress. This is especially true for the natural sciences, where the development of experimental methods in the 16th and 17th century is often seen as the driving force behind the success and prominence of the natural sciences. In some cases, the choice of methodology may have a severe impact on a research project. The reason is that very different and sometimes even opposite conclusions may follow from the same factual material based on the chosen methodology.
Aleksandr Georgievich Spirkin argues that methodology, when understood in a wide sense, is of great importance since the world presents us with innumerable entities and relations between them. Methods are needed to simplify this complexity and find a way of mastering it. On the theoretical side, this concerns ways of forming true beliefs and solving problems. On the practical side, this concerns skills of influencing nature and dealing with each other. These different methods are usually passed down from one generation to the next. Spirkin holds that the interest in methodology on a more abstract level arose in attempts to formalize these techniques to improve them as well as to make it easier to use them and pass them on. In the field of research, for example, the goal of this process is to find reliable means to acquire knowledge in contrast to mere opinions acquired by unreliable means. In this regard, "methodology is a way of obtaining and building up ... knowledge".
Various theorists have observed that the interest in methodology has risen significantly in the 20th century. This increased interest is reflected not just in academic publications on the subject but also in the institutionalized establishment of training programs focusing specifically on methodology. This phenomenon can be interpreted in different ways. Some see it as a positive indication of the topic's theoretical and practical importance. Others interpret this interest in methodology as an excessive preoccupation that draws time and energy away from doing research on concrete subjects by applying the methods instead of researching them. This ambiguous attitude towards methodology is sometimes even exemplified in the same person. Max Weber, for example, criticized the focus on methodology during his time while making significant contributions to it himself. Spirkin believes that one important reason for this development is that contemporary society faces many global problems. These problems cannot be solved by a single researcher or a single discipline but are in need of collaborative efforts from many fields. Such interdisciplinary undertakings profit a lot from methodological advances, both concerning the ability to understand the methods of the respective fields and in relation to developing more homogeneous methods equally used by all of them.
Criticism
Most criticism of methodology is directed at one specific form or understanding of it. In such cases, one particular methodological theory is rejected but not methodology at large when understood as a field of research comprising many different theories. In this regard, many objections to methodology focus on the quantitative approach, specifically when it is treated as the only viable approach. Nonetheless, there are also more fundamental criticisms of methodology in general. They are often based on the idea that there is little value to abstract discussions of methods and the reasons cited for and against them. In this regard, it may be argued that what matters is the correct employment of methods and not their meticulous study. Sigmund Freud, for example, compared methodologists to "people who clean their glasses so thoroughly that they never have time to look through them". According to C. Wright Mills, the practice of methodology often degenerates into a "fetishism of method and technique".
Some even hold that methodological reflection is not just a waste of time but actually has negative side effects. Such an argument may be defended by analogy to other skills that work best when the agent focuses only on employing them. In this regard, reflection may interfere with the process and lead to avoidable mistakes. According to an example by Gilbert Ryle, "[w]e run, as a rule, worse, not better, if we think a lot about our feet". A less severe version of this criticism does not reject methodology per se but denies its importance and rejects an intense focus on it. In this regard, methodology has still a limited and subordinate utility but becomes a diversion or even counterproductive by hindering practice when given too much emphasis.
Another line of criticism concerns more the general and abstract nature of methodology. It states that the discussion of methods is only useful in concrete and particular cases but not concerning abstract guidelines governing many or all cases. Some anti-methodologists reject methodology based on the claim that researchers need freedom to do their work effectively. But this freedom may be constrained and stifled by "inflexible and inappropriate guidelines". For example, according to Kerry Chamberlain, a good interpretation needs creativity to be provocative and insightful, which is prohibited by a strictly codified approach. Chamberlain uses the neologism "methodolatry" to refer to this alleged overemphasis on methodology. Similar arguments are given in Paul Feyerabend's book "Against Method".
However, these criticisms of methodology in general are not always accepted. Many methodologists defend their craft by pointing out how the efficiency and reliability of research can be improved through a proper understanding of methodology.
A criticism of more specific forms of methodology is found in the works of the sociologist Howard S. Becker. He is quite critical of methodologists based on the claim that they usually act as advocates of one particular method usually associated with quantitative research. An often-cited quotation in this regard is that "[m]ethodology is too important to be left to methodologists". Alan Bryman has rejected this negative outlook on methodology. He holds that Becker's criticism can be avoided by understanding methodology as an inclusive inquiry into all kinds of methods and not as a mere doctrine for converting non-believers to one's preferred method.
In different fields
Part of the importance of methodology is reflected in the number of fields to which it is relevant. They include the natural sciences and the social sciences as well as philosophy and mathematics.
Natural sciences
The dominant methodology in the natural sciences (like astronomy, biology, chemistry, geoscience, and physics) is called the scientific method. Its main cognitive aim is usually seen as the creation of knowledge, but various closely related aims have also been proposed, like understanding, explanation, or predictive success. Strictly speaking, there is no one single scientific method. In this regard, the expression "scientific method" refers not to one specific procedure but to different general or abstract methodological aspects characteristic of all the aforementioned fields. Important features are that the problem is formulated in a clear manner and that the evidence presented for or against a theory is public, reliable, and replicable. The last point is important so that other researchers are able to repeat the experiments to confirm or disconfirm the initial study. For this reason, various factors and variables of the situation often have to be controlled to avoid distorting influences and to ensure that subsequent measurements by other researchers yield the same results. The scientific method is a quantitative approach that aims at obtaining numerical data. This data is often described using mathematical formulas. The goal is usually to arrive at some universal generalizations that apply not just to the artificial situation of the experiment but to the world at large. Some data can only be acquired using advanced measurement instruments. In cases where the data is very complex, it is often necessary to employ sophisticated statistical techniques to draw conclusions from it.
The scientific method is often broken down into several steps. In a typical case, the procedure starts with regular observation and the collection of information. These findings then lead the scientist to formulate a hypothesis describing and explaining the observed phenomena. The next step consists in conducting an experiment designed for this specific hypothesis. The actual results of the experiment are then compared to the expected results based on one's hypothesis. The findings may then be interpreted and published, either as a confirmation or disconfirmation of the initial hypothesis.
Two central aspects of the scientific method are observation and experimentation. This distinction is based on the idea that experimentation involves some form of manipulation or intervention. This way, the studied phenomena are actively created or shaped. For example, a biologist inserting viral DNA into a bacterium is engaged in a form of experimentation. Pure observation, on the other hand, involves studying independent entities in a passive manner. This is the case, for example, when astronomers observe the orbits of astronomical objects far away. Observation played the main role in ancient science. The scientific revolution in the 16th and 17th century effected a paradigm change that gave a much more central role to experimentation in the scientific methodology. This is sometimes expressed by stating that modern science actively "puts questions to nature". While the distinction is usually clear in the paradigmatic cases, there are also many intermediate cases where it is not obvious whether they should be characterized as observation or as experimentation.
A central discussion in this field concerns the distinction between the inductive and the hypothetico-deductive methodology. The core disagreement between these two approaches concerns their understanding of the confirmation of scientific theories. The inductive approach holds that a theory is confirmed or supported by all its positive instances, i.e. by all the observations that exemplify it. For example, the observations of many white swans confirm the universal hypothesis that "all swans are white". The hypothetico-deductive approach, on the other hand, focuses not on positive instances but on deductive consequences of the theory. This way, the researcher uses deduction before conducting an experiment to infer what observations they expect. These expectations are then compared to the observations they actually make. This approach often takes a negative form based on falsification. In this regard, positive instances do not confirm a hypothesis but negative instances disconfirm it. Positive indications that the hypothesis is true are only given indirectly if many attempts to find counterexamples have failed. A cornerstone of this approach is the null hypothesis, which assumes that there is no connection (see causality) between whatever is being observed. It is up to the researcher to do all they can to disprove their own hypothesis through relevant methods or techniques, documented in a clear and replicable process. If they fail to do so, it can be concluded that the null hypothesis is false, which provides support for their own hypothesis about the relation between the observed phenomena.
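As a hedged sketch of this null-hypothesis logic (the data, the group names, and the 0.05 threshold are assumptions made for the example, not part of the source), the following Python code uses SciPy's two-sample t-test to ask whether simulated treatment measurements differ from a control group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical measurements for a control group and a treatment group.
control = rng.normal(loc=10.0, scale=2.0, size=50)
treatment = rng.normal(loc=11.2, scale=2.0, size=50)

# Null hypothesis: there is no connection, i.e. the group means do not differ.
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance threshold (an assumption)
if p_value < alpha:
    print(f"p = {p_value:.4f}: the null hypothesis is rejected, "
          "which supports a connection between the variables.")
else:
    print(f"p = {p_value:.4f}: the null hypothesis cannot be rejected.")
```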
Social sciences
Significantly more methodological variety is found in the social sciences, where both quantitative and qualitative approaches are used. They employ various forms of data collection, such as surveys, interviews, focus groups, and the nominal group technique. Surveys belong to quantitative research and usually involve some form of questionnaire given to a large group of individuals. It is paramount that the questions are easily understandable by the participants since the answers might not have much value otherwise. Surveys normally restrict themselves to closed questions in order to avoid various problems that come with the interpretation of answers to open questions. They contrast in this regard with interviews, which put more emphasis on the individual participant and often involve open questions. Structured interviews are planned in advance and have a fixed set of questions given to each individual. They contrast with unstructured interviews, which are closer to a free-flow conversation and require more improvisation on the side of the interviewer for finding interesting and relevant questions. Semi-structured interviews constitute a middle ground: they include both predetermined questions and questions not planned in advance. Structured interviews make it easier to compare the responses of the different participants and to draw general conclusions. However, they also limit what may be discovered and thus constrain the investigation in many ways. Depending on the type and depth of the interview, this method belongs either to quantitative or to qualitative research. The terms research conversation and muddy interview have been used to describe interviews conducted in informal settings which may not occur purely for the purposes of data collection. Some researchers employ the go-along method by conducting interviews while they and the participants navigate through and engage with their environment.
Focus groups are a qualitative research method often used in market research. They constitute a form of group interview involving a small number of demographically similar people. Researchers can use this method to collect data based on the interactions and responses of the participants. The interview often starts by asking the participants about their opinions on the topic under investigation, which may, in turn, lead to a free exchange in which the group members express and discuss their personal views. An important advantage of focus groups is that they can provide insight into how ideas and understanding operate in a cultural context. However, it is usually difficult to use these insights to discern more general patterns true for a wider public. Another advantage is that focus groups can help the researcher identify a wide range of distinct perspectives on the issue in a short time. The group interaction may also help clarify and expand interesting contributions. One disadvantage stems from the moderator's personality and from group effects, both of which may influence the opinions stated by the participants. When applied to cross-cultural settings, cultural and linguistic adaptations and group composition considerations are important to encourage greater participation in the group discussion.
The nominal group technique is similar to focus groups with a few important differences. The group often consists of experts in the field in question. The group size is similar but the interaction between the participants is more structured. The goal is to determine how much agreement there is among the experts on the different issues. The initial responses are often given in written form by each participant without a prior conversation between them. In this manner, group effects potentially influencing the expressed opinions are minimized. In later steps, the different responses and comments may be discussed and compared to each other by the group as a whole.
Most of these forms of data collection involve some type of observation. Observation can take place either in a natural setting, i.e. the field, or in a controlled setting such as a laboratory. Controlled settings carry with them the risk of distorting the results due to their artificiality. Their advantage lies in precisely controlling the relevant factors, which can help make the observations more reliable and repeatable. Non-participatory observation involves a distanced or external approach. In this case, the researcher focuses on describing and recording the observed phenomena without causing or changing them, in contrast to participatory observation.
An important methodological debate in the field of social sciences concerns the question of whether they deal with hard, objective, and value-neutral facts, as the natural sciences do. Positivists agree with this characterization, in contrast to interpretive and critical perspectives on the social sciences. According to William Neumann, positivism can be defined as "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity". This view is rejected by interpretivists. Max Weber, for example, argues that the method of the natural sciences is inadequate for the social sciences. Instead, more importance is placed on meaning and how people create and maintain their social worlds. The critical methodology in social science is associated with Karl Marx and Sigmund Freud. It is based on the assumption that many of the phenomena studied using the other approaches are mere distortions or surface illusions. It seeks to uncover deeper structures of the material world hidden behind these distortions. This approach is often guided by the goal of helping people effect social changes and improvements.
Philosophy
Philosophical methodology is the metaphilosophical field of inquiry studying the methods used in philosophy. These methods structure how philosophers conduct their research, acquire knowledge, and select between competing theories. It concerns both descriptive issues of what methods have been used by philosophers in the past and normative issues of which methods should be used. Many philosophers emphasize that these methods differ significantly from the methods found in the natural sciences in that they usually do not rely on experimental data obtained through measuring equipment. Which method one follows can have wide implications for how philosophical theories are constructed, what theses are defended, and what arguments are cited in favor or against. In this regard, many philosophical disagreements have their source in methodological disagreements. Historically, the discovery of new methods, like methodological skepticism and the phenomenological method, has had important impacts on the philosophical discourse.
A great variety of methods has been employed throughout the history of philosophy. Methodological skepticism gives special importance to the role of systematic doubt. This way, philosophers try to discover absolutely certain first principles that are indubitable. The geometric method starts from such first principles and employs deductive reasoning to construct a comprehensive philosophical system based on them. Phenomenology gives particular importance to how things appear to be. It consists in suspending one's judgments about whether these things actually exist in the external world. This technique is known as epoché and can be used to study appearances independent of assumptions about their causes. The method of conceptual analysis came to particular prominence with the advent of analytic philosophy. It studies concepts by breaking them down into their most fundamental constituents to clarify their meaning. Common sense philosophy uses common and widely accepted beliefs as a philosophical tool. They are used to draw interesting conclusions. This is often employed in a negative sense to discredit radical philosophical positions that go against common sense. Ordinary language philosophy has a very similar method: it approaches philosophical questions by looking at how the corresponding terms are used in ordinary language.
Many methods in philosophy rely on some form of intuition. They are used, for example, to evaluate thought experiments, which involve imagining situations to assess their possible consequences in order to confirm or refute philosophical theories. The method of reflective equilibrium tries to form a coherent perspective by examining and reevaluating all the relevant beliefs and intuitions. Pragmatists focus on the practical consequences of philosophical theories to assess whether they are true or false. Experimental philosophy is a recently developed approach that uses the methodology of social psychology and the cognitive sciences for gathering empirical evidence and justifying philosophical claims.
Mathematics
In the field of mathematics, various methods can be distinguished, such as synthetic, analytic, deductive, inductive, and heuristic methods. For example, the difference between synthetic and analytic methods is that the former start from the known and proceed to the unknown while the latter seek to find a path from the unknown to the known. Geometry textbooks often proceed using the synthetic method. They start by listing known definitions and axioms and proceed by taking inferential steps, one at a time, until the solution to the initial problem is found. An important advantage of the synthetic method is its clear and short logical exposition. One disadvantage is that it is usually not obvious in the beginning that the steps taken lead to the intended conclusion. This may then come as a surprise to the reader since it is not explained how the mathematician knew in the beginning which steps to take. The analytic method often reflects better how mathematicians actually make their discoveries. For this reason, it is often seen as the better method for teaching mathematics. It starts with the intended conclusion and tries to find another formula from which it can be deduced. It then goes on to apply the same process to this new formula until it has traced back all the way to already proven theorems. The difference between the two methods concerns primarily how mathematicians think and present their proofs. The two are equivalent in the sense that the same proof may be presented either way.
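As a small worked illustration (the example is chosen here for this purpose and does not come from the text), the inequality between the arithmetic and geometric means of two non-negative numbers can be presented in both directions: synthetically, from a known fact to the result, and analytically, from the intended result back to a known fact.

```latex
% Synthetic presentation: start from a known fact and deduce the result.
(\sqrt{a}-\sqrt{b})^{2} \ge 0
\;\Longrightarrow\; a - 2\sqrt{ab} + b \ge 0
\;\Longrightarrow\; \frac{a+b}{2} \ge \sqrt{ab}.

% Analytic presentation: start from the intended conclusion and trace it
% back to a statement that is already known to hold.
\frac{a+b}{2} \ge \sqrt{ab}
\;\Longleftarrow\; a + b - 2\sqrt{ab} \ge 0
\;\Longleftarrow\; (\sqrt{a}-\sqrt{b})^{2} \ge 0.
```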
Statistics
Statistics investigates the analysis, interpretation, and presentation of data. It plays a central role in many forms of quantitative research that have to deal with the data of many observations and measurements. In such cases, data analysis is used to cleanse, transform, and model the data to arrive at practically useful conclusions. There are numerous methods of data analysis. They are usually divided into descriptive statistics and inferential statistics. Descriptive statistics restricts itself to the data at hand. It tries to summarize the most salient features and present them in insightful ways. This can happen, for example, by visualizing its distribution or by calculating indices such as the mean or the standard deviation. Inferential statistics, on the other hand, uses this data based on a sample to draw inferences about the population at large. That can take the form of making generalizations and predictions or by assessing the probability of a concrete hypothesis.
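The division of labour between the two branches can be sketched in a few lines of Python (a hypothetical example using only the standard library): descriptive statistics summarize the sample at hand, while inferential statistics use that sample to say something about the wider population, here via an approximate 95% confidence interval for the mean.

```python
import math
import random
import statistics

random.seed(1)

# A hypothetical sample of 200 measurements drawn from a larger population.
sample = [random.gauss(7.5, 1.2) for _ in range(200)]

# Descriptive statistics: summarize the data at hand.
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)
print(f"sample mean = {mean:.2f}, sample standard deviation = {stdev:.2f}")

# Inferential statistics: generalize from the sample to the population,
# here with an approximate 95% confidence interval for the population mean.
margin = 1.96 * stdev / math.sqrt(len(sample))
print(f"approximate 95% confidence interval for the population mean: "
      f"[{mean - margin:.2f}, {mean + margin:.2f}]")
```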
Pedagogy
Pedagogy can be defined as the study or science of teaching methods. In this regard, it is the methodology of education: it investigates the methods and practices that can be applied to fulfill the aims of education. These aims include the transmission of knowledge as well as fostering skills and character traits. Its main focus is on teaching methods in the context of regular schools. But in its widest sense, it encompasses all forms of education, both inside and outside schools. In this wide sense, pedagogy is concerned with "any conscious activity by one person designed to enhance learning in another". The teaching happening this way is a process taking place between two parties: teachers and learners. Pedagogy investigates how the teacher can help the learner undergo experiences that promote their understanding of the subject matter in question.
Various influential pedagogical theories have been proposed. Mental-discipline theories were already common in ancient Greece and state that the main goal of teaching is to train intellectual capacities. They are usually based on a certain ideal of the capacities, attitudes, and values possessed by educated people. According to naturalistic theories, there is an inborn natural tendency in children to develop in a certain way. For them, pedagogy is about how to help this process happen by ensuring that the required external conditions are set up. Herbartianism identifies five essential components of teaching: preparation, presentation, association, generalization, and application. They correspond to different phases of the educational process: getting ready for it, showing new ideas, bringing these ideas in relation to known ideas, understanding the general principle behind their instances, and putting what one has learned into practice. Learning theories focus primarily on how learning takes place and formulate the proper methods of teaching based on these insights. One of them is apperception or association theory, which understands the mind primarily in terms of associations between ideas and experiences. On this view, the mind is initially a blank slate. Learning is a form of developing the mind by helping it establish the right associations. Behaviorism is a more externally oriented learning theory. It identifies learning with classical conditioning, in which the learner's behavior is shaped by presenting them with a stimulus with the goal of evoking and solidifying the desired response pattern to this stimulus.
The choice of which specific method is best to use depends on various factors, such as the subject matter and the learner's age. Interest and curiosity on the side of the student are among the key factors of learning success. This means that one important aspect of the chosen teaching method is to ensure that these motivational forces are maintained, through intrinsic or extrinsic motivation. Many forms of education also include regular assessment of the learner's progress, for example, in the form of tests. This helps to ensure that the teaching process is successful and to make adjustments to the chosen method if necessary.
Related concepts
Methodology has several related concepts, such as paradigm and algorithm. In the context of science, a paradigm is a conceptual worldview. It consists of a number of basic concepts and general theories that determine how the studied phenomena are to be conceptualized and which scientific methods are considered reliable for studying them. Various theorists emphasize similar aspects of methodologies, for example, that they shape the general outlook on the studied phenomena and help the researcher see them in a new light.
In computer science, an algorithm is a procedure or methodology to reach the solution of a problem with a finite number of steps. Each step has to be precisely defined so it can be carried out in an unambiguous manner for each application. For example, the Euclidean algorithm is an algorithm that solves the problem of finding the greatest common divisor of two integers. It is based on simple steps like comparing the two numbers and subtracting one from the other.
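The subtraction-based procedure described above translates directly into a short program; the following Python sketch follows that description (the zero check is an added safeguard for the edge case where one input is zero).

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via the subtraction form of the Euclidean algorithm."""
    a, b = abs(a), abs(b)
    if a == 0 or b == 0:          # edge case: gcd(n, 0) is defined as n
        return a or b
    while a != b:
        # Compare the two numbers and subtract the smaller from the larger.
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd(252, 105))  # prints 21
```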
See also
Philosophical methodology
Political methodology
Scientific method
Software development process
Survey methodology
References
Further reading
Berg, Bruce L., 2009, Qualitative Research Methods for the Social Sciences. Seventh Edition. Boston MA: Pearson Education Inc.
Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, California: Sage Publications.
Creswell, J. (2003). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, California: Sage Publications.
Franklin, M.I. (2012). Understanding Research: Coping with the Quantitative-Qualitative Divide. London and New York: Routledge.
Guba, E. and Lincoln, Y. (1989). Fourth Generation Evaluation. Newbury Park, California: Sage Publications.
Herrman, C. S. (2009). "Fundamentals of Methodology", a series of papers On the Social Science Research Network (SSRN), online.
Howell, K. E. (2013) Introduction to the Philosophy of Methodology. London, UK: Sage Publications.
Ndira, E. Alana, Slater, T. and Bucknam, A. (2011). Action Research for Business, Nonprofit, and Public Administration - A Tool for Complex Times. Thousand Oaks, CA: Sage.
Joubish, Farooq Dr. (2009). Educational Research Department of Education, Federal Urdu University, Karachi, Pakistan
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd edition). Thousand Oaks, California: Sage Publications.
Silverman, David (Ed). (2011). Qualitative Research: Issues of Theory, Method and Practice, Third Edition. London, Thousand Oaks, New Delhi, Singapore: Sage Publications
Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan. 2014. Handbook of Research Methods in Military Studies New York: Routledge.
External links
Freedictionary, usage note on the word Methodology
Researcherbook, research methodology forum and resources | 0.789411 | 0.998144 | 0.787946 |
Systems theory | Systems theory is the transdisciplinary study of systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or artificial. Every system has causal boundaries, is influenced by its context, defined by its structure, function and role, and expressed through its relations with other systems. A system is "more than the sum of its parts" when it expresses synergy or emergent behavior.
Changing one component of a system may affect other components or the whole system. It may be possible to predict these changes in patterns of behavior. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization. Some systems support other systems, maintaining the other system to prevent failure. The goals of systems theory are to model a system's dynamics, constraints, conditions, and relations; and to elucidate principles (such as purpose, measure, methods, tools) that can be discerned and applied to other systems at every level of nesting, and in a wide range of fields for achieving optimized equifinality.
General systems theory is about developing broadly applicable concepts and principles, as opposed to concepts and principles specific to one domain of knowledge. It distinguishes dynamic or active systems from static or passive systems. Active systems are activity structures or components that interact in behaviours and processes or interrelate through formal contextual boundary conditions (attractors). Passive systems are structures and components that are being processed. For example, a computer program is passive when it is a file stored on the hard drive and active when it runs in memory. The field is related to systems thinking, machine logic, and systems engineering.
Overview
Systems theory is manifest in the work of practitioners in many disciplines, for example the works of physician Alexander Bogdanov, biologist Ludwig von Bertalanffy, linguist Béla H. Bánáthy, and sociologist Talcott Parsons; in the study of ecological systems by Howard T. Odum, Eugene Odum; in Fritjof Capra's study of organizational theory; in the study of management by Peter Senge; in interdisciplinary areas such as human resource development in the works of Richard A. Swanson; and in the works of educators Debora Hammond and Alfonso Montuori.
As a transdisciplinary, interdisciplinary, and multiperspectival endeavor, systems theory brings together principles and concepts from ontology, the philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (especially family systems therapy), and economics.
Systems theory promotes dialogue between autonomous areas of study as well as within systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences."
Others remain closer to the direct systems concepts developed by the original systems theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, has studied emergent properties, suggesting that they offer analogues for living systems. The distinction of autopoiesis as made by Humberto Maturana and Francisco Varela represent further developments in this field. Important names in contemporary systems science include Russell Ackoff, Ruzena Bajcsy, Béla H. Bánáthy, Gregory Bateson, Anthony Stafford Beer, Peter Checkland, Barbara Grosz, Brian Wilson, Robert L. Flood, Allenna Leonard, Radhika Nagpal, Fritjof Capra, Warren McCulloch, Kathleen Carley, Michael C. Jackson, Katia Sycara, and Edgar Morin among others.
With the modern foundations for a general theory of systems following World War I, Ervin László, in the preface for Bertalanffy's book, Perspectives on General System Theory, points out that the translation of "general system theory" from German into English has "wrought a certain amount of havoc":
Theorie (or Lehre) "has a much broader meaning in German than the closest English words 'theory' and 'science'," just as Wissenschaft (or 'Science'). These ideas refer to an organized body of knowledge and "any systematically presented set of concepts, whether empirically, axiomatically, or philosophically" represented, while many associate Lehre with theory and science in the etymology of general systems, though it also does not translate from the German very well; its "closest equivalent" translates to 'teaching', but "sounds dogmatic and off the mark." An adequate overlap in meaning is found within the word "nomothetic", which can mean "having the capability to posit long-lasting sense." While the idea of a "general systems theory" might have lost many of its root meanings in the translation, by defining a new way of thinking about science and scientific paradigms, systems theory became a widespread term used for instance to describe the interdependence of relationships created in organizations.
A system in this frame of reference can contain regularly interacting or interrelating groups of activities. For example, in noting the influence in the evolution of "an individually oriented industrial psychology [into] a systems and developmentally oriented organizational psychology," some theorists recognize that organizations are complex social systems and that separating the parts from the whole reduces the overall effectiveness of organizations. This view differs from conventional models that center on individuals, structures, departments and units, which separate the part from the whole instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function.
László explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity" which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organisations and their environments can be seen as the foremost source of complexity and interdependence. In most cases, the whole has properties that cannot be known from analysis of the constituent elements in isolation.
Béla H. Bánáthy, who argued—along with the founders of the systems society—that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at the International Society for the System Sciences, Bánáthy defines a perspective that iterates this view:
Applications
Art
Biology
Systems biology is a movement that draws on several trends in bioscience research. Proponents describe systems biology as a biology-based interdisciplinary study field that focuses on complex interactions in biological systems, claiming that it uses a new perspective (holism instead of reduction).
Particularly from the year 2000 onwards, the biosciences use the term widely and in a variety of contexts. An often stated ambition of systems biology is the modelling and discovery of emergent properties, i.e. properties of a system whose theoretical description requires techniques that fall within the remit of systems biology. It is thought that Ludwig von Bertalanffy may have created the term systems biology in 1928.
Subdisciplines of systems biology include:
Systems neuroscience
Systems pharmacology
Ecology
Systems ecology is an interdisciplinary field of ecology that takes a holistic approach to the study of ecological systems, especially ecosystems; it can be seen as an application of general systems theory to ecology.
Central to the systems ecology approach is the idea that an ecosystem is a complex system exhibiting emergent properties. Systems ecology focuses on interactions and transactions within and between biological and ecological systems, and is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. It uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems.
Chemistry
Systems chemistry is the science of studying networks of interacting molecules, to create new functions from a set (or library) of molecules with different hierarchical levels and emergent properties. Systems chemistry is also related to the origin of life (abiogenesis).
Engineering
Systems engineering is an interdisciplinary approach and means for enabling the realisation and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts. Systems engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user's needs.
User-centered design process
Systems thinking is a crucial part of user-centered design processes and is necessary to understand the whole impact of a new human computer interaction (HCI) information system. Overlooking this and developing software without input from the future users (mediated by user experience designers) is a serious design flaw that can lead to the complete failure of information systems, as well as increased stress and mental illness for their users, resulting in higher costs and a large waste of resources. It is currently surprisingly uncommon for organizations and governments to investigate the project management decisions leading to serious design flaws and lack of usability.
The Institute of Electrical and Electronics Engineers estimates that roughly 15% of the estimated $1 trillion used to develop information systems every year is completely wasted and the produced systems are discarded before implementation by entirely preventable mistakes. According to the CHAOS report published in 2018 by the Standish Group, a vast majority of information systems fail or partly fail according to their survey:
Mathematics
System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, and time delays.
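A minimal sketch of the stock-and-flow idea follows (all parameter values are invented for the illustration): a single stock is filled by a constant inflow and drained by an outflow proportional to the stock itself, which forms a negative feedback loop that drives the stock toward an equilibrium.

```python
# Minimal stock-and-flow simulation with a negative feedback loop.
# Stock: water in a reservoir; inflow is constant; outflow grows with the stock.

inflow = 10.0          # units added per time step (assumed constant)
drain_fraction = 0.2   # fraction of the stock that drains each step (feedback)
stock = 0.0
dt = 1.0

for t in range(26):
    outflow = drain_fraction * stock      # feedback: outflow depends on the stock
    stock += (inflow - outflow) * dt      # integrate the net flow
    if t % 5 == 0:
        print(f"t={t:2d}  stock={stock:6.2f}")

# The stock approaches the equilibrium inflow / drain_fraction = 50 units.
```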
Social sciences and humanities
Systems theory in anthropology
Systems theory in archaeology
Systems theory in political science
Psychology
Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems.
It received inspiration from systems theory and systems thinking, as well as the basics of theoretical work from Roger Barker, Gregory Bateson, Humberto Maturana and others. It takes an approach in which groups and individuals are considered as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition seems more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior that holds the name engineering psychology."
In systems psychology, characteristics of organizational behaviour (such as individual needs, rewards, expectations, and attributes of the people interacting with the systems) are taken into account in order to create an effective system.
Informatics
System theory has been applied in the field of neuroinformatics and connectionist cognitive science. Attempts are being made in neurocognition to merge connectionist cognitive neuroarchitectures with the approach of system theory and dynamical systems theory.
History
Precursors
Systems thinking dates back to antiquity, whether considering the first systems of written communication, from Sumerian cuneiform to Maya numerals, or feats of engineering such as the Egyptian pyramids. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified with the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus. Ludwig von Bertalanffy traced systems concepts to the philosophy of Gottfried Leibniz and Nicholas of Cusa's coincidentia oppositorum. While modern systems may seem considerably more complicated, they are embedded in this long history.
Figures like James Joule and Sadi Carnot represent an important step to introduce the systems approach into the (rationalist) hard sciences of the 19th century, also known as the energy transformation. Then, the thermodynamics of this century, by Rudolf Clausius, Josiah Gibbs and others, established the system reference model as a formal scientific object.
Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasising how understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget. Some consider interdisciplinary perspectives critical in breaking away from industrial age models and thinking, wherein history is taught as history and mathematics as mathematics, the arts and the sciences remain separate specializations, and teaching is often treated as behaviorist conditioning.
The contemporary work of Peter Senge provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." In this way, some systems theorists attempt to provide alternatives to, and evolved ideation from orthodox theories which have grounds in classical assumptions, including individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management. The theorists sought holistic methods by developing systems concepts that could integrate with different areas.
Some may view the contradiction of reductionism in conventional theory (which has as its subject a single part) as simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing interactions of the parts as not static and constant but dynamic processes. Some questioned the conventional closed systems with the development of open systems perspectives. The shift originated from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge and still remains in the tradition of theorists that sought to provide means to organize human life. In other words, theorists rethought the preceding history of ideas; they did not lose them. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor for the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and later psychologists that laid the foundations of modern organizational theory and management by the late 19th century.
Founding and early development
Where assumptions in Western science from Plato and Aristotle to Isaac Newton's Principia (1687) have historically influenced all areas from the hard to social sciences (see, David Easton's seminal development of the "political system" as an analytical construct), the original systems theorists explored the implications of 20th-century advances in terms of systems.
Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation with the university's interdisciplinary Division of the Social Sciences established in 1931.
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science.
"General systems theory" (GST; German: allgemeine Systemlehre) was coined in the 1940s by Ludwig von Bertalanffy, who sought a new approach to the study of living systems. Bertalanffy developed the theory via lectures beginning in 1937 and then via publications beginning in 1946. According to Mike C. Jackson (2000), Bertalanffy promoted an embryonic form of GST as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles.
Jackson also claimed that Bertalanffy's work was informed by Alexander Bogdanov's three-volume Tectology (1912–1917), providing the conceptual base for GST. A similar position is held by Richard Mattessich (1978) and Fritjof Capra (1996). Despite this, Bertalanffy never even mentioned Bogdanov in his works.
The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that the observer can analyze and use to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of the sciences. System philosophy, methodology and application are complementary to this science.
Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science by 1950.
In 1954, von Bertalanffy, along with Anatol Rapoport, Ralph W. Gerard, and Kenneth Boulding, came together at the Center for Advanced Study in the Behavioral Sciences in Palo Alto to discuss the creation of a "society for the advancement of General Systems Theory." In December that year, a meeting of around 70 people was held in Berkeley to form a society for the exploration and development of GST. The Society for General Systems Research (renamed the International Society for Systems Science in 1988) was established in 1956 thereafter as an affiliate of the American Association for the Advancement of Science (AAAS), specifically catalyzing systems theory as an area of study. The field developed from the work of Bertalanffy, Rapoport, Gerard, and Boulding, as well as other theorists in the 1950s like William Ross Ashby, Margaret Mead, Gregory Bateson, and C. West Churchman, among others.
Bertalanffy's ideas were adopted by others, working in mathematics, psychology, biology, game theory, and social network analysis. Subjects that were studied included those of complexity, self-organization, connectionism and adaptive systems. In fields like cybernetics, researchers such as Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster examined complex systems mathematically; Von Neumann discovered cellular automata and self-reproducing systems, again with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time, Howard T. Odum, known as a radiation ecologist, recognized that the study of general systems required a language that could depict energetics, thermodynamics and kinetics at any system scale. To fulfill this role, Odum developed a general system, or universal language, based on the circuit language of electronics, known as the Energy Systems Language.
The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists. Some began to recognize that theories defined in association with systems theory had deviated from the initial general systems theory view. Economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues. Since the end of the Cold War, a renewed interest in systems theory emerged, combined with efforts to strengthen an ethical view on the subject.
In sociology, systems thinking also began in the 20th century, including Talcott Parsons' action theory and Niklas Luhmann's social systems theory. According to Rudolf Stichweh (2011):
"Since its beginnings the social sciences were an important part of the establishment of systems theory... [T]he two most influential suggestions were the comprehensive sociological versions of systems theory which were proposed by Talcott Parsons since the 1950s and by Niklas Luhmann since the 1970s."
Elements of systems thinking can also be seen in the work of James Clerk Maxwell, particularly control theory.
General systems research and systems inquiry
Many early systems theorists aimed at finding a general systems theory that could explain all systems in all fields of science. Ludwig von Bertalanffy began developing his 'general systems theory' via lectures in 1937 and then via publications from 1946. The concept received extensive focus in his 1968 book, General System Theory: Foundations, Development, Applications.
There are many definitions of a general system. Properties that such definitions commonly include are: an overall goal of the system; parts of the system and relationships between these parts; and emergent properties of the interaction between the parts of the system that are not exhibited by any part on its own. Derek Hitchins defines a system in terms of entropy as a collection of parts and relationships between the parts in which the parts or their interrelationships decrease entropy.
Bertalanffy aimed to bring together under one heading the organismic science that he had observed in his work as a biologist. He wanted to use the word system for those principles that are common to systems in general. In General System Theory (1968), he wrote:
In the preface to von Bertalanffy's Perspectives on General System Theory, Ervin László stated:
Bertalanffy outlines systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry:
philosophy: the ontology, epistemology, and axiology of systems
theory: a set of interrelated concepts and principles applying to all systems
methodology: the set of models, strategies, methods and tools that instrumentalize systems theory and philosophy
application: the application and interaction of the domains
These operate in a recursive relationship, he explained; integrating 'philosophy' and 'theory' as knowledge, and 'method' and 'application' as action; systems inquiry is thus knowledgeable action.
Properties of general systems
General systems may be split into a hierarchy of systems, in which there are fewer interactions between the different systems than between the components within each system. The alternative is heterarchy, in which all components within the system interact with one another. Sometimes an entire system is represented inside another system as a part, sometimes referred to as a holon. These hierarchies of systems are studied in hierarchy theory. The amount of interaction between parts of systems higher in the hierarchy and parts of the system lower in the hierarchy is reduced. If all the parts of a system are tightly coupled (interact with one another a lot), then the system cannot be decomposed into different systems. The amount of coupling between parts of a system may differ temporally, with some parts interacting more often than others, or for different processes in a system. Herbert A. Simon distinguished between decomposable, nearly decomposable and nondecomposable systems.
Russell L. Ackoff distinguished general systems by how their goals and subgoals could change over time. He distinguished between goal-maintaining, goal-seeking, multi-goal and reflective (or goal-changing) systems.
System types and fields
Theoretical fields
Chaos theory
Complex system
Control theory
Dynamical systems theory
Earth system science
Ecological systems theory
Living systems theory
Sociotechnical system
Systemics
Urban metabolism
World-systems theory
Cybernetics
Cybernetics is the study of the communication and control of regulatory feedback both in living and lifeless systems (organisms, organizations, machines), and in combinations of those. Its focus is how anything (digital, mechanical or biological) controls its behavior, processes information, reacts to information, and changes or can be changed to better accomplish those three primary tasks.
The terms systems theory and cybernetics have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) make general systems a proper subset of cybernetics. In cybernetics, complex systems have been examined mathematically by such researchers as W. Ross Ashby, Norbert Wiener, John von Neumann, and Heinz von Foerster.
Threads of cybernetics began in the late 1800s that led toward the publishing of seminal works (such as Wiener's Cybernetics in 1948 and Bertalanffy's General System Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Bertalanffy specifically made the point of distinguishing between the areas in noting the influence of cybernetics: "Systems theory is frequently identified with cybernetics and control theory. This again is incorrect. Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems.... [T]he model is of wide application but should not be identified with 'systems theory' in general ... [and] warning is necessary against its incautious expansion to fields for which its concepts are not made." Cybernetics, catastrophe theory, chaos theory and complexity theory share the goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata, neural networks, artificial intelligence, and artificial life are related fields, but they do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, since it emphasizes their different tools and methodologies, from pure mathematics in the beginning to pure computer science today. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today.
System types
Biological
Anatomical systems
Nervous
Sensory
Ecological systems
Living systems
Complex
Complex adaptive system
Conceptual
Coordinate
Deterministic (philosophy)
Digital ecosystem
Experimental
Writing
Coupled human–environment
Database
Deterministic (science)
Mathematical
Dynamical system
Formal system
Economic
Energy
Holarchical
Information
Legal
Measurement
Imperial
Metric
Multi-agent
Nonlinear
Operating
Planetary
Political
Social
Star
Complex adaptive systems
Complex adaptive systems (CAS), coined by John H. Holland, Murray Gell-Mann, and others at the interdisciplinary Santa Fe Institute, are special cases of complex systems: they are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience.
In contrast to control systems, in which negative feedback dampens and reverses disequilibria, CAS are often subject to positive feedback, which magnifies and perpetuates changes, converting local irregularities into global features.
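As a toy numerical contrast between the two regimes, the sketch below applies a simple linear feedback rule to a small perturbation. The update rule and the gains are illustrative assumptions, not a model of any particular control system or CAS.

```python
# Illustrative sketch: negative feedback damps a perturbation, positive feedback
# amplifies it, under the toy update rule x(t+1) = x(t) + gain * x(t).

def simulate(gain, x0=1.0, steps=10):
    """Return the trajectory of a perturbation x under repeated feedback."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(trajectory[-1] + gain * trajectory[-1])
    return trajectory

damped = simulate(gain=-0.5)     # negative feedback: the disturbance decays
amplified = simulate(gain=+0.5)  # positive feedback: the disturbance grows

print("negative feedback:", [round(x, 3) for x in damped])
print("positive feedback:", [round(x, 3) for x in amplified])
```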
See also
List of types of systems theory
Glossary of systems theory
Autonomous agency theory
Bibliography of sociology
Cellular automata
Chaos theory
Complexity
Emergence
Engaged theory
Fractal
Grey box model
Irreducible complexity
Meta-systems
Multidimensional systems
Open and closed systems in social science
Pattern language
Recursion (computer science)
Reductionism
Redundancy (engineering)
Reversal theory
Social rule system theory
Sociotechnical system
Sociology and complexity science
Structure–organization–process
Systemantics
System identification
Systematics – study of multi-term systems
Systemics
Systemography
Systems science
Theoretical ecology
Tektology
User-in-the-loop
Viable system theory
Viable systems approach
World-systems theory
Structuralist economics
Dependency theory
Hierarchy theory
Organizations
List of systems sciences organizations
References
Further reading
Ashby, W. Ross. 1956. An Introduction to Cybernetics. Chapman & Hall.
—— 1960. Design for a Brain: The Origin of Adaptive Behavior (2nd ed.). Chapman & Hall.
Bateson, Gregory. 1972. Steps to an Ecology of Mind: Collected essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press.
von Bertalanffy, Ludwig. 1968. General System Theory: Foundations, Development, Applications New York: George Braziller
Burks, Arthur. 1970. Essays on Cellular Automata. University of Illinois Press.
Cherry, Colin. 1957. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: The MIT Press.
Churchman, C. West. 1971. The Design of Inquiring Systems: Basic Concepts of Systems and Organizations. New York: Basic Books.
Checkland, Peter. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Wiley.
Gleick, James. 1997. Chaos: Making a New Science, Random House.
Haken, Hermann. 1983. Synergetics: An Introduction – 3rd Edition, Springer.
Holland, John H. 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Cambridge: The MIT Press.
Luhmann, Niklas. 2013. Introduction to Systems Theory, Polity.
Macy, Joanna. 1991. Mutual Causality in Buddhism and General Systems Theory: The Dharma of Natural Systems. SUNY Press.
Maturana, Humberto, and Francisco Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Springer Science & Business Media.
Miller, James Grier. 1978. Living Systems. Mcgraw-Hill.
von Neumann, John. 1951 "The General and Logical Theory of Automata." pp. 1–41 in Cerebral Mechanisms in Behavior.
—— 1956. "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." Automata Studies 34: 43–98.
von Neumann, John, and Arthur Burks, eds. 1966. Theory of Self-Reproducing Automata. Illinois University Press.
Parsons, Talcott. 1951. The Social System. The Free Press.
Prigogine, Ilya. 1980. From Being to Becoming: Time and Complexity in the Physical Sciences. W H Freeman & Co.
Simon, Herbert A. 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106.
—— 1996. The Sciences of the Artificial (3rd ed.), vol. 136. The MIT Press.
Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication.
Adapted from Shannon, Claude. 1948. "A Mathematical Theory of Communication." Bell System Technical Journal 27(3): 379–423.
Thom, René. 1972. Structural Stability and Morphogenesis: An Outline of a General Theory of Models. Reading, Massachusetts.
Volk, Tyler. 1995. Metapatterns: Across Space, Time, and Mind. New York: Columbia University Press.
Weaver, Warren. 1948. "Science and Complexity." The American Scientist, pp. 536–544.
Wiener, Norbert. 1965. Cybernetics: Or the Control and Communication in the Animal and the Machine (2nd ed.). Cambridge: The MIT Press.
Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media.
Zadeh, Lofti. 1962. "From Circuit Theory to System Theory." Proceedings of the IRE 50(5): 856–865.
External links
Systems Thinking at Wikiversity
Systems theory at Principia Cybernetica Web
Introduction to systems thinking – 55 slides
Organizations
International Society for the System Sciences
New England Complex Systems Institute
System Dynamics Society
Emergence
Interdisciplinary subfields of sociology
Complex systems theory
Systems science
Pharmacodynamics

Pharmacodynamics (PD) is the study of the biochemical and physiologic effects of drugs (especially pharmaceutical drugs). The effects can include those manifested within animals (including humans), microorganisms, or combinations of organisms (for example, infection).
Pharmacodynamics and pharmacokinetics are the main branches of pharmacology, being itself a topic of biology interested in the study of the interactions of both endogenous and exogenous chemical substances with living organisms.
In particular, pharmacodynamics is the study of how a drug affects an organism, whereas pharmacokinetics is the study of how the organism affects the drug. Both together influence dosing, benefit, and adverse effects. Pharmacodynamics is sometimes abbreviated as PD and pharmacokinetics as PK, especially in combined reference (for example, when speaking of PK/PD models).
Pharmacodynamics places particular emphasis on dose–response relationships, that is, the relationships between drug concentration and effect. One dominant example is drug-receptor interactions as modeled by
L + R <=> LR
where L, R, and LR represent ligand (drug), receptor, and ligand-receptor complex concentrations, respectively. This equation represents a simplified model of reaction dynamics that can be studied mathematically through tools such as free energy maps.
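As one concrete (and purely illustrative) way of studying this reaction dynamics, the sketch below integrates the mass-action rate equation for L + R <=> LR with a simple Euler step; the rate constants, initial concentrations, and step size are hypothetical values chosen only to show relaxation toward equilibrium.

```python
# Illustrative sketch of mass-action kinetics for L + R <=> LR.
# d[LR]/dt = k_on*[L]*[R] - k_off*[LR]; all parameter values are assumed.

k_on = 1.0e6    # association rate constant, 1/(M*s)  (assumed)
k_off = 1.0e-2  # dissociation rate constant, 1/s     (assumed)

L, R, LR = 1.0e-7, 1.0e-8, 0.0  # initial concentrations in molar (assumed)
dt = 1.0e-3                     # time step in seconds

for _ in range(200_000):        # simulate 200 s, long enough to reach equilibrium here
    rate = k_on * L * R - k_off * LR   # net rate of complex formation
    L -= rate * dt
    R -= rate * dt
    LR += rate * dt

print(f"K_d = k_off/k_on = {k_off / k_on:.2e} M")
print(f"steady-state check [L][R]/[LR] = {L * R / LR:.2e} M")
```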
Basics
There are four principal protein targets with which drugs can interact:
Enzymes (e.g. neostigmine and acetylcholinesterase)
Inhibitors
Inducers
Activators
Membrane carriers [Reuptake vs Efflux] (e.g. tricyclic antidepressants and catecholamine uptake-1)
Enhancer (RE)
Inhibitor (RI)
Releaser (RA)
Ion channels (e.g. nimodipine and voltage-gated Ca2+ channels)
Blocker
Opener
Receptor (e.g. Listed in table below)
Agonists can be full, partial or inverse.
Antagonists can be competitive, non-competitive, or uncompetitive.
An allosteric modulator can have three kinds of effect on a receptor. One is its ability or inability to activate the receptor on its own (two possibilities). The other two concern agonist affinity and agonist efficacy, each of which may be increased, decreased, or unaffected (three possibilities for each).
Effects on the body
The majority of drugs either
There are 7 main drug actions:
stimulating action through direct receptor agonism and downstream effects
depressing action through direct receptor agonism and downstream effects (ex.: inverse agonist)
blocking/antagonizing action (as with silent antagonists), the drug binds the receptor but does not activate it
stabilizing action, the drug seems to act neither as a stimulant nor as a depressant (ex.: some drugs possess receptor activity that allows them to stabilize general receptor activation, like buprenorphine in opioid dependent individuals or aripiprazole in schizophrenia, all depending on the dose and the recipient)
exchanging/replacing substances or accumulating them to form a reserve (ex.: glycogen storage)
direct beneficial chemical reaction as in free radical scavenging
direct harmful chemical reaction which might result in damage or destruction of the cells, through induced toxic or lethal damage (cytotoxicity or irritation)
Desired activity
The desired activity of a drug is mainly due to successful targeting of one of the following:
Cellular membrane disruption
Chemical reaction with downstream effects
Interaction with enzyme proteins
Interaction with structural proteins
Interaction with carrier proteins
Interaction with ion channels
Ligand binding to receptors:
Hormone receptors
Neuromodulator receptors
Neurotransmitter receptors
General anesthetics were once thought to work by disordering the neural membranes, thereby altering the Na+ influx. Antacids and chelating agents combine chemically in the body. Enzyme-substrate binding is a way to alter the production or metabolism of key endogenous chemicals, for example aspirin irreversibly inhibits the enzyme prostaglandin synthetase (cyclooxygenase) thereby preventing inflammatory response. Colchicine, a drug for gout, interferes with the function of the structural protein tubulin, while digitalis, a drug still used in heart failure, inhibits the activity of the carrier molecule, Na-K-ATPase pump. The widest class of drugs act as ligands that bind to receptors that determine cellular effects. Upon drug binding, receptors can elicit their normal action (agonist), blocked action (antagonist), or even action opposite to normal (inverse agonist).
In principle, a pharmacologist would aim for a target plasma concentration of the drug for a desired level of response. In reality, there are many factors affecting this goal. Pharmacokinetic factors determine peak concentrations, and concentrations cannot be maintained with absolute consistency because of metabolic breakdown and excretory clearance. Genetic factors may exist which would alter metabolism or drug action itself, and a patient's immediate status may also affect indicated dosage.
Undesirable effects
Undesirable effects of a drug include:
Increased probability of cell mutation (carcinogenic activity)
A multitude of simultaneous assorted actions which may be deleterious
Interaction (additive, multiplicative, or metabolic)
Induced physiological damage, or abnormal chronic conditions
Therapeutic window
The therapeutic window is the range between the amount of a medication that gives the desired effect (effective dose) and the amount that gives more adverse effects than desired effects. A medication with a narrow therapeutic window must be administered with care and control, e.g. by frequently measuring the blood concentration of the drug, since it easily loses its effect or gives adverse effects.
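As a minimal illustration of the window logic described above, the sketch below classifies a measured plasma concentration against a lower (effective) and an upper (adverse-effect) threshold; both thresholds and the measurements are invented placeholder values, not data for any real drug.

```python
# Illustrative therapeutic-window check with hypothetical thresholds.

MIN_EFFECTIVE_CONC = 10.0  # mg/L, lowest concentration giving the desired effect (assumed)
MAX_TOLERATED_CONC = 20.0  # mg/L, above this adverse effects dominate (assumed)

def classify(plasma_conc_mg_per_l: float) -> str:
    """Classify a measured plasma concentration relative to the therapeutic window."""
    if plasma_conc_mg_per_l < MIN_EFFECTIVE_CONC:
        return "sub-therapeutic"
    if plasma_conc_mg_per_l > MAX_TOLERATED_CONC:
        return "above window: adverse effects likely"
    return "within therapeutic window"

for measured in (5.0, 15.0, 25.0):
    print(f"{measured:5.1f} mg/L -> {classify(measured)}")
```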
Duration of action
The duration of action of a drug is the length of time that particular drug is effective. Duration of action is a function of several parameters including plasma half-life, the time to equilibrate between plasma and target compartments, and the off rate of the drug from its biological target.
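A minimal sketch of how these parameters interact is shown below for a one-compartment model with first-order elimination: the duration of action is estimated as the time for the plasma concentration to fall from its peak to an assumed minimum effective concentration. All numerical values are hypothetical.

```python
# Illustrative one-compartment estimate of duration of action.
# C(t) = C_peak * exp(-k_elim * t); duration is the time until C(t) reaches the MEC.

import math

C_peak = 40.0      # mg/L, peak plasma concentration (assumed)
half_life_h = 4.0  # hours, plasma half-life (assumed)
MEC = 5.0          # mg/L, minimum effective concentration (assumed)

k_elim = math.log(2) / half_life_h            # first-order elimination rate constant
duration_h = math.log(C_peak / MEC) / k_elim  # time until the concentration falls to the MEC

print(f"elimination rate constant: {k_elim:.3f} 1/h")
print(f"approximate duration of action: {duration_h:.1f} h")
```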
Recreational drug use
In the context of recreational psychoactive drug use, duration refers to the length of time over which the subjective effects of a psychoactive substance manifest themselves.
Duration can be broken down into 6 parts: (1) total duration (2) onset (3) come up (4) peak (5) offset and (6) after effects. Depending upon the substance consumed, each of these occurs in a separate and continuous fashion.
Total
The total duration of a substance can be defined as the amount of time it takes for the effects of a substance to completely wear off into sobriety, starting from the moment the substance is first administered.
Onset
The onset phase can be defined as the period until the very first changes in perception (i.e. "first alerts") are able to be detected.
Come up
The "come up" phase can be defined as the period between the first noticeable changes in perception and the point of highest subjective intensity. This is colloquially known as "coming up."
Peak
The peak phase can be defined as period of time in which the intensity of the substance's effects are at its height.
Offset
The offset phase can be defined as the amount of time in between the conclusion of the peak and shifting into a sober state. This is colloquially referred to as "coming down."
After effects
The after effects can be defined as any residual effects which may remain after the experience has reached its conclusion. After effects depend on the substance and usage. This is colloquially known as a "hangover" for negative after effects of substances, such as alcohol, cocaine, and MDMA or an "afterglow" for describing a typically positive, pleasant effect, typically found in substances such as cannabis, LSD in low to high doses, and ketamine.
Receptor binding and effect
The binding of ligands (drugs) to receptors is governed by the law of mass action, which relates the large-scale status to the rate of numerous molecular processes. The rates of formation and un-formation can be used to determine the equilibrium concentration of bound receptors. For the binding equilibrium

L + R <=> LR

the equilibrium dissociation constant is defined by

Kd = [L][R] / [LR]

where L = ligand, R = receptor, and square brackets [] denote concentration. The fraction of bound receptors is then

occupancy = [LR] / ([R] + [LR]) = [L] / ([L] + Kd)

where occupancy is the fraction of receptors bound by the ligand.
This expression is one way to consider the effect of a drug, in which the response is related to the fraction of bound receptors (see: Hill equation). The fraction of bound receptors is known as occupancy. The relationship between occupancy and pharmacological response is usually non-linear. This explains the so-called receptor reserve phenomenon i.e. the concentration producing 50% occupancy is typically higher than the concentration producing 50% of maximum response. More precisely, receptor reserve refers to a phenomenon whereby stimulation of only a fraction of the whole receptor population apparently elicits the maximal effect achievable in a particular tissue.
The simplest interpretation of receptor reserve is that there are more receptors on the cell surface than are necessary to produce the full effect. Taking a more sophisticated approach, receptor reserve is an integrative measure of the response-inducing capacity of an agonist (in some receptor models it is termed intrinsic efficacy or intrinsic activity) and of the signal amplification capacity of the corresponding receptor (and its downstream signaling pathways). Thus, the existence (and magnitude) of receptor reserve depends on the agonist (efficacy), the tissue (signal amplification ability) and the measured effect (pathways activated to cause signal amplification). As receptor reserve is very sensitive to the agonist's intrinsic efficacy, it is usually defined only for full (high-efficacy) agonists.
Often the response is determined as a function of log[L] to consider many orders of magnitude of concentration. However, there is no biological or physical theory that relates effects to the log of concentration. It is just convenient for graphing purposes. It is useful to note that 50% of the receptors are bound when [L]=Kd .
On a semi-logarithmic concentration–response plot of two hypothetical receptor agonists, the curve lying further to the left corresponds to the more potent agonist, since lower concentrations are needed for a given response; in each case the effect increases as a function of concentration.
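The sketch below evaluates fractional occupancy over several orders of magnitude of ligand concentration using the occupancy relation given above; the two dissociation constants are hypothetical, standing in for a more potent and a less potent agonist, and occupancy rather than response is shown (the two need not be proportional, as noted above).

```python
# Illustrative occupancy curves from occupancy = [L] / ([L] + K_d), with assumed K_d values.

K_d_potent = 1e-9  # molar (assumed)
K_d_weak = 1e-7    # molar (assumed)

def occupancy(conc, k_d):
    return conc / (conc + k_d)

print("   [L] (M)   potent    weak")
for exponent in range(-11, -4):
    conc = 10.0 ** exponent
    print(f"{conc:10.0e}  {occupancy(conc, K_d_potent):6.3f}  {occupancy(conc, K_d_weak):6.3f}")
# At [L] = K_d each curve passes through 0.5, and the more potent ligand (smaller K_d)
# reaches a given occupancy at roughly 100-fold lower concentrations.
```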
Multicellular pharmacodynamics
The concept of pharmacodynamics has been expanded to include Multicellular Pharmacodynamics (MCPD). MCPD is the study of the static and dynamic properties and relationships between a set of drugs and a dynamic and diverse multicellular four-dimensional organization. It is the study of the workings of a drug on a minimal multicellular system (mMCS), both in vivo and in silico. Networked Multicellular Pharmacodynamics (Net-MCPD) further extends the concept of MCPD to model regulatory genomic networks together with signal transduction pathways, as part of a complex of interacting components in the cell.
Toxicodynamics
Pharmacokinetics and pharmacodynamics are termed toxicokinetics and toxicodynamics in the field of ecotoxicology. Here, the focus is on toxic effects on a wide range of organisms. The corresponding models are called toxicokinetic-toxicodynamic models.
See also
Mechanism of action
Dose-response relationship
Pharmacokinetics
ADME
Antimicrobial pharmacodynamics
Pharmaceutical company
Schild regression
References
External links
Vijay. (2003). Predictive software for drug design and development. Pharmaceutical Development and Regulation 1(3): 159–168.
Werner, E., In silico multicellular systems biology and minimal genomes, DDT vol 8, no 24, pp 1121–1127, Dec 2003. (Introduces the concepts MCPD and Net-MCPD)
Dr. David W. A. Bourne, OU College of Pharmacy Pharmacokinetic and Pharmacodynamic Resources.
Introduction to Pharmacokinetics and Pharmacodynamics (PDF)
Pharmacy
Medicinal chemistry
Life sciences industry
Chemical potential

In thermodynamics, the chemical potential of a species is the energy that can be absorbed or released due to a change of the particle number of the given species, e.g. in a chemical reaction or phase transition. The chemical potential of a species in a mixture is defined as the rate of change of free energy of a thermodynamic system with respect to the change in the number of atoms or molecules of the species that are added to the system. Thus, it is the partial derivative of the free energy with respect to the amount of the species, all other species' concentrations in the mixture remaining constant. When both temperature and pressure are held constant, and the number of particles is expressed in moles, the chemical potential is the partial molar Gibbs free energy. At chemical equilibrium or in phase equilibrium, the total sum of the product of chemical potentials and stoichiometric coefficients is zero, as the free energy is at a minimum. In a system in diffusion equilibrium, the chemical potential of any chemical species is uniformly the same everywhere throughout the system.
In semiconductor physics, the chemical potential of a system of electrons at zero absolute temperature is known as the Fermi level.
Overview
Particles tend to move from higher chemical potential to lower chemical potential because this reduces the free energy. In this way, chemical potential is a generalization of "potentials" in physics such as gravitational potential. When a ball rolls down a hill, it is moving from a higher gravitational potential (higher internal energy thus higher potential for work) to a lower gravitational potential (lower internal energy). In the same way, as molecules move, react, dissolve, melt, etc., they will always tend naturally to go from a higher chemical potential to a lower one, changing the particle number, which is the conjugate variable to chemical potential.
A simple example is a system of dilute molecules diffusing in a homogeneous environment. In this system, the molecules tend to move from areas with high concentration to low concentration, until eventually, the concentration is the same everywhere. The microscopic explanation for this is based on kinetic theory and the random motion of molecules. However, it is simpler to describe the process in terms of chemical potentials: For a given temperature, a molecule has a higher chemical potential in a higher-concentration area and a lower chemical potential in a low concentration area. Movement of molecules from higher chemical potential to lower chemical potential is accompanied by a release of free energy. Therefore, it is a spontaneous process.
Another example, not based on concentration but on phase, is an ice cube on a plate above 0 °C. An H2O molecule that is in the solid phase (ice) has a higher chemical potential than a water molecule that is in the liquid phase (water) above 0 °C. When some of the ice melts, H2O molecules convert from solid to the warmer liquid where their chemical potential is lower, so the ice cube shrinks. At the temperature of the melting point, 0 °C, the chemical potentials in water and ice are the same; the ice cube neither grows nor shrinks, and the system is in equilibrium.
A third example is illustrated by the chemical reaction of dissociation of a weak acid HA (such as acetic acid, A = CH3COO−):
HA <=> H+ + A−
Vinegar contains acetic acid. When acid molecules dissociate, the concentration of the undissociated acid molecules (HA) decreases and the concentrations of the product ions (H+ and A−) increase. Thus the chemical potential of HA decreases and the sum of the chemical potentials of H+ and A− increases. When the sums of chemical potential of reactants and products are equal the system is at equilibrium and there is no tendency for the reaction to proceed in either the forward or backward direction. This explains why vinegar is acidic, because acetic acid dissociates to some extent, releasing hydrogen ions into the solution.
Chemical potentials are important in many aspects of multi-phase equilibrium chemistry, including melting, boiling, evaporation, solubility, osmosis, partition coefficient, liquid-liquid extraction and chromatography. In each case the chemical potential of a given species at equilibrium is the same in all phases of the system.
In electrochemistry, ions do not always tend to go from higher to lower chemical potential, but they do always go from higher to lower electrochemical potential. The electrochemical potential completely characterizes all of the influences on an ion's motion, while the chemical potential includes everything except the electric force. (See below for more on this terminology.)
Thermodynamic definition
The chemical potential μi of species i (atomic, molecular or nuclear) is defined, as all intensive quantities are, by the phenomenological fundamental equation of thermodynamics, which holds for both reversible and irreversible infinitesimal processes:

dU = T dS − P dV + Σi μi dNi

where dU is the infinitesimal change of internal energy U, dS the infinitesimal change of entropy S, dV the infinitesimal change of volume V for a thermodynamic system in thermal equilibrium, and dNi the infinitesimal change of particle number Ni of species i as particles are added or subtracted. T is absolute temperature, S is entropy, P is pressure, and V is volume. Other work terms, such as those involving electric, magnetic or gravitational fields, may be added.

From the above equation, the chemical potential is given by

μi = (∂U/∂Ni), taken at constant entropy S, volume V and the other particle numbers Nj (j ≠ i).

This is because the internal energy U is a state function, so if its differential exists, then the differential is an exact differential such as

dU = Σk (∂U/∂xk) dxk

for independent variables x1, x2, ... , xN of U.
This expression of the chemical potential as a partial derivative of U with respect to the corresponding species particle number is inconvenient for condensed-matter systems, such as chemical solutions, as it is hard to control the volume and entropy to be constant while particles are added. A more convenient expression may be obtained by making a Legendre transformation to another thermodynamic potential: the Gibbs free energy G = U + PV − TS. Taking the differential dG = dU + P dV + V dP − T dS − S dT (the product rule is applied to PV and TS) and using the above expression for dU, a differential relation for dG is obtained:

dG = −S dT + V dP + Σi μi dNi

As a consequence, another expression for μi results:

μi = (∂G/∂Ni), taken at constant temperature T, pressure P and the other particle numbers Nj (j ≠ i),

and the change in Gibbs free energy of a system that is held at constant temperature and pressure is simply

dG = Σi μi dNi

In thermodynamic equilibrium, when the system concerned is at constant temperature and pressure but can exchange particles with its external environment, the Gibbs free energy is at its minimum for the system, that is dG = 0. It follows that

Σi μi dNi = 0
Use of this equality provides the means to establish the equilibrium constant for a chemical reaction.
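A minimal sketch of that route is given below: the standard Gibbs energy of reaction is built from stoichiometrically weighted standard chemical potentials, and the equilibrium constant follows from the standard relation K = exp(−ΔG°/RT). The standard chemical potentials are invented numbers for a hypothetical reaction A + B <=> C, not data for a real system.

```python
# Illustrative link between chemical potentials and an equilibrium constant,
# K = exp(-Delta_G_standard / (R*T)). All mu_standard values are assumed.

import math

R = 8.314   # J/(mol*K)
T = 298.15  # K

# hypothetical reaction A + B <=> C, written with signed stoichiometric coefficients
species = {
    "A": {"nu": -1, "mu_standard": -50_000.0},  # J/mol (assumed)
    "B": {"nu": -1, "mu_standard": -30_000.0},  # J/mol (assumed)
    "C": {"nu": +1, "mu_standard": -95_000.0},  # J/mol (assumed)
}

delta_G_standard = sum(s["nu"] * s["mu_standard"] for s in species.values())
K = math.exp(-delta_G_standard / (R * T))

print(f"Delta_G_standard = {delta_G_standard / 1000:.1f} kJ/mol")
print(f"equilibrium constant K = {K:.2e}")
```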
By making further Legendre transformations from U to other thermodynamic potentials like the enthalpy H = U + PV and the Helmholtz free energy F = U − TS, expressions for the chemical potential may be obtained in terms of these:

μi = (∂H/∂Ni) at constant S, P and Nj (j ≠ i) = (∂F/∂Ni) at constant T, V and Nj (j ≠ i)
These different forms for the chemical potential are all equivalent, meaning that they have the same physical content, and may be useful in different physical situations.
Applications
The Gibbs–Duhem equation is useful because it relates individual chemical potentials. For example, in a binary mixture, at constant temperature and pressure, the chemical potentials of the two participants A and B are related by

dμB = −(nA/nB) dμA

where nA is the number of moles of A and nB is the number of moles of B. Every instance of phase or chemical equilibrium is characterized by a constant. For instance, the melting of ice is characterized by a temperature, known as the melting point, at which solid and liquid phases are in equilibrium with each other. Chemical potentials can be used to explain the slopes of lines on a phase diagram by using the Clapeyron equation, which in turn can be derived from the Gibbs–Duhem equation. They are used to explain colligative properties such as melting-point depression by the application of pressure. Henry's law for the solute can be derived from Raoult's law for the solvent using chemical potentials.
History
Chemical potential was first described by the American engineer, chemist and mathematical physicist Josiah Willard Gibbs. He defined it as follows:
Gibbs later noted also that for the purposes of this definition, any chemical element or combination of elements in given proportions may be considered a substance, whether capable or not of existing by itself as a homogeneous body. This freedom to choose the boundary of the system allows the chemical potential to be applied to a huge range of systems. The term can be used in thermodynamics and physics for any system undergoing change. Chemical potential is also referred to as partial molar Gibbs energy (see also partial molar property). Chemical potential is measured in units of energy/particle or, equivalently, energy/mole.
In his 1873 paper A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, Gibbs introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e. bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume–entropy–internal energy graph, Gibbs was able to determine three states of equilibrium, i.e. "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words from the aforementioned paper, Gibbs states:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.
Electrochemical, internal, external, and total chemical potential
The abstract definition of chemical potential given above—total change in free energy per extra mole of substance—is more specifically called total chemical potential. If two locations have different total chemical potentials for a species, some of the difference may be due to potentials associated with "external" force fields (electric potential energy, gravitational potential energy, etc.), while the rest would be due to "internal" factors (density, temperature, etc.). Therefore, the total chemical potential can be split into internal chemical potential and external chemical potential:

μtot = μint + μext

where

μext = q Vele + m g h + ...

i.e., the external potential is the sum of electric potential, gravitational potential, etc. (where q and m are the charge and mass of the species, Vele and h are the electric potential and height of the container, respectively, and g is the acceleration due to gravity). The internal chemical potential includes everything else besides the external potentials, such as density, temperature, and enthalpy. This formalism can be understood by assuming that the total energy of a system, U, is the sum of two parts: an internal energy, Uint, and an external energy due to the interaction of each particle with an external field, Uext = N (q Vele + m g h). The definition of chemical potential applied to Utot = Uint + Uext then yields the above expression for μtot.
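A minimal numerical sketch of this decomposition is given below, evaluating the external term for a hypothetical singly charged ion on a molar basis (so the charge q becomes one faraday per mole); the internal contribution and all other numbers are invented placeholders.

```python
# Illustrative split mu_total = mu_internal + mu_external, with
# mu_external = q*V_ele + m*g*h evaluated per mole of a hypothetical ion.

FARADAY = 96_485.0  # C/mol, charge of one mole of elementary charges
g = 9.81            # m/s^2

mu_internal = -200_000.0  # J/mol, internal part (density, temperature, ...) (assumed)
charge_number = +1        # ionic charge of the species (assumed)
molar_mass = 0.023        # kg/mol (assumed)
V_ele = 0.1               # V, local electric potential (assumed)
height = 1.0              # m, height in the gravitational field (assumed)

mu_external = charge_number * FARADAY * V_ele + molar_mass * g * height
mu_total = mu_internal + mu_external

print(f"external contribution   : {mu_external:.1f} J/mol")
print(f"total chemical potential: {mu_total:.1f} J/mol")
```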
The phrase "chemical potential" sometimes means "total chemical potential", but that is not universal. In some fields, in particular electrochemistry, semiconductor physics, and solid-state physics, the term "chemical potential" means internal chemical potential, while the term electrochemical potential is used to mean total chemical potential.
Systems of particles
Electrons in solids
Electrons in solids have a chemical potential, defined the same way as the chemical potential of a chemical species: The change in free energy when electrons are added or removed from the system. In the case of electrons, the chemical potential is usually expressed in energy per particle rather than energy per mole, and the energy per particle is conventionally given in units of electronvolt (eV).
Chemical potential plays an especially important role in solid-state physics and is closely related to the concepts of work function, Fermi energy, and Fermi level. For example, n-type silicon has a higher internal chemical potential of electrons than p-type silicon. In a p–n junction diode at equilibrium the chemical potential (internal chemical potential) varies from the p-type to the n-type side, while the total chemical potential (electrochemical potential, or, Fermi level) is constant throughout the diode.
As described above, when describing chemical potential, one has to say "relative to what". In the case of electrons in semiconductors, internal chemical potential is often specified relative to some convenient point in the band structure, e.g., to the bottom of the conduction band. It may also be specified "relative to vacuum", to yield a quantity known as work function, however, work function varies from surface to surface even on a completely homogeneous material. Total chemical potential, on the other hand, is usually specified relative to electrical ground.
In atomic physics, the chemical potential of the electrons in an atom is sometimes said to be the negative of the atom's electronegativity. Likewise, the process of chemical potential equalization is sometimes referred to as the process of electronegativity equalization. This connection comes from the Mulliken electronegativity scale. By inserting the energetic definitions of the ionization potential and electron affinity into the Mulliken electronegativity, it is seen that the Mulliken chemical potential is a finite difference approximation of the electronic energy with respect to the number of electrons, i.e.,

μMulliken = −χMulliken = −(IP + EA)/2 ≈ ∂E/∂N
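The sketch below simply evaluates this finite-difference reading for a single element; the ionization potential and electron affinity are approximate values for chlorine quoted from memory, so they should be treated as illustrative rather than authoritative.

```python
# Illustrative finite-difference estimate of the electronic chemical potential,
# mu ~ -(IP + EA)/2, i.e. minus the Mulliken electronegativity.

ionization_potential_eV = 12.97  # E(N-1) - E(N), approximate value for chlorine
electron_affinity_eV = 3.61      # E(N) - E(N+1), approximate value for chlorine

mulliken_electronegativity = (ionization_potential_eV + electron_affinity_eV) / 2
chemical_potential = -mulliken_electronegativity

print(f"Mulliken electronegativity ~ {mulliken_electronegativity:.2f} eV")
print(f"electronic chemical potential ~ {chemical_potential:.2f} eV per electron")
```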
Sub-nuclear particles
In recent years, thermal physics has applied the definition of chemical potential to systems in particle physics and its associated processes. For example, in a quark–gluon plasma or other QCD matter, at every point in space there is a chemical potential for photons, a chemical potential for electrons, a chemical potential for baryon number, electric charge, and so forth.
In the case of photons, photons are bosons and can very easily and rapidly appear or disappear. Therefore, at thermodynamic equilibrium, the chemical potential of photons is in most physical situations zero everywhere. The reason is that, if the chemical potential somewhere were higher than zero, photons would spontaneously disappear from that area until the chemical potential went back to zero; likewise, if the chemical potential somewhere were less than zero, photons would spontaneously appear until the chemical potential went back to zero. Since this process occurs extremely rapidly (at least, it occurs rapidly in the presence of dense charged matter, or in the walls of the textbook example of a photon gas of blackbody radiation), it is safe to assume that the photon chemical potential here is never different from zero. A physical situation where the chemical potential for photons can differ from zero is that of material-filled optical microcavities, with spacings between cavity mirrors in the wavelength regime. In such two-dimensional cases, photon gases with a tuneable chemical potential, reminiscent of gases of material particles, can be observed.
Electric charge is different because it is intrinsically conserved, i.e. it can be neither created nor destroyed. It can, however, diffuse. The "chemical potential of electric charge" controls this diffusion: Electric charge, like anything else, will tend to diffuse from areas of higher chemical potential to areas of lower chemical potential. Other conserved quantities like baryon number are the same. In fact, each conserved quantity is associated with a chemical potential and a corresponding tendency to diffuse to equalize it out.
In the case of electrons, the behaviour depends on temperature and context. At low temperatures, with no positrons present, electrons cannot be created or destroyed. Therefore, there is an electron chemical potential that might vary in space, causing diffusion. At very high temperatures, however, electrons and positrons can spontaneously appear out of the vacuum (pair production), so the chemical potential of electrons by themselves becomes a less useful quantity than the chemical potential of the conserved quantities like (electrons minus positrons).
The chemical potentials of bosons and fermions are related to the number of particles and the temperature by Bose–Einstein statistics and Fermi–Dirac statistics respectively.
Ideal vs. non-ideal solutions
Generally the chemical potential is given as a sum of an ideal contribution and an excess contribution:

μi = μi,ideal + μi,excess
In an ideal solution, the chemical potential of species i (μi) is dependent on temperature and pressure.
μi0(T, P) is defined as the chemical potential of pure species i. Given this definition, the chemical potential of species i in an ideal solution is

μi,ideal ≈ μi0(T, P) + RT ln(xi)

where R is the gas constant and xi is the mole fraction of species i contained in the solution. The chemical potential becomes negative infinity when xi = 0, but this does not lead to nonphysical results because xi = 0 means that species i is not present in the system.

This equation assumes that μi only depends on the mole fraction xi contained in the solution. This neglects intermolecular interactions between species i and itself and between species i and the other species [i–(j≠i)]. This can be corrected for by factoring in the activity coefficient of species i, defined as γi. This correction yields

μi = μi0(T, P) + RT ln(xi γi)
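A minimal numerical sketch of the ideal and activity-corrected expressions is shown below; the reference potential, mole fraction, and activity coefficient are hypothetical illustration values.

```python
# Illustrative ideal vs activity-corrected chemical potential,
# mu_i = mu_i0 + R*T*ln(gamma_i * x_i); all numerical inputs are assumed.

import math

R = 8.314   # J/(mol*K)
T = 298.15  # K

mu_i0 = -100_000.0  # J/mol, chemical potential of the pure species (assumed)
x_i = 0.25          # mole fraction of species i (assumed)
gamma_i = 0.8       # activity coefficient; gamma_i = 1 recovers the ideal case (assumed)

mu_ideal = mu_i0 + R * T * math.log(x_i)
mu_real = mu_i0 + R * T * math.log(gamma_i * x_i)

print(f"ideal-solution chemical potential : {mu_ideal / 1000:.2f} kJ/mol")
print(f"activity-corrected potential      : {mu_real / 1000:.2f} kJ/mol")
print(f"excess contribution R*T*ln(gamma) : {(mu_real - mu_ideal) / 1000:.2f} kJ/mol")
```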
See also
Chemical equilibrium
Electrochemical potential
Equilibrium chemistry
Excess chemical potential
Fugacity
Partial molar property
Thermodynamic activity
Thermodynamic equilibrium
Sources
Citations
References
External links
Physical chemistry
Potentials
Chemical thermodynamics
Thermodynamic properties
Chemical engineering thermodynamics
Oxidative phosphorylation

Oxidative phosphorylation, also known as electron transport-linked phosphorylation or terminal oxidation, is the metabolic pathway in which cells use enzymes to oxidize nutrients, thereby releasing chemical energy in order to produce adenosine triphosphate (ATP). In eukaryotes, this takes place inside mitochondria. Almost all aerobic organisms carry out oxidative phosphorylation. This pathway is so pervasive because it releases more energy than alternative fermentation processes such as anaerobic glycolysis.
The energy stored in the chemical bonds of glucose is released by the cell in the citric acid cycle, producing carbon dioxide and the energetic electron donors NADH and FADH2. Oxidative phosphorylation uses these molecules and O2 to produce ATP, which is used throughout the cell whenever energy is needed. During oxidative phosphorylation, electrons are transferred from the electron donors to a series of electron acceptors in a series of redox reactions ending in oxygen, whose reaction releases half of the total energy.
In eukaryotes, these redox reactions are catalyzed by a series of protein complexes within the inner membrane of the cell's mitochondria, whereas, in prokaryotes, these proteins are located in the cell's plasma membrane. These linked sets of proteins are called the electron transport chain. In eukaryotes, five main protein complexes are involved, whereas in prokaryotes many different enzymes are present, using a variety of electron donors and acceptors.
The energy transferred by electrons flowing through this electron transport chain is used to transport protons across the inner mitochondrial membrane, in a process called electron transport. This generates potential energy in the form of a pH gradient and the resulting electrical potential across this membrane. This store of energy is tapped when protons flow back across the membrane and down the potential energy gradient, through a large enzyme called ATP synthase in a process called chemiosmosis. The ATP synthase uses the energy to transform adenosine diphosphate (ADP) into adenosine triphosphate, in a phosphorylation reaction. The reaction is driven by the proton flow, which forces the rotation of a part of the enzyme. The ATP synthase is a rotary mechanical motor.
Although oxidative phosphorylation is a vital part of metabolism, it produces reactive oxygen species such as superoxide and hydrogen peroxide, which lead to propagation of free radicals, damaging cells and contributing to disease and, possibly, aging and senescence. The enzymes carrying out this metabolic pathway are also the target of many drugs and poisons that inhibit their activities.
Chemiosmosis
Oxidative phosphorylation works by using energy-releasing chemical reactions to drive energy-requiring reactions. The two sets of reactions are said to be coupled. This means one cannot occur without the other. The chain of redox reactions driving the flow of electrons through the electron transport chain, from electron donors such as NADH to electron acceptors such as oxygen and hydrogen (protons), is an exergonic process – it releases energy, whereas the synthesis of ATP is an endergonic process, which requires an input of energy. Both the electron transport chain and the ATP synthase are embedded in a membrane, and energy is transferred from the electron transport chain to the ATP synthase by movements of protons across this membrane, in a process called chemiosmosis. A current of protons is driven from the negative N-side of the membrane to the positive P-side through the proton-pumping enzymes of the electron transport chain. The movement of protons creates an electrochemical gradient across the membrane, which is called the proton-motive force. It has two components: a difference in proton concentration (a H+ gradient, ΔpH) and a difference in electric potential, with the N-side having a negative charge.
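A minimal numerical sketch of how the two components combine is shown below, using the standard conversion factor 2.303RT/F between a pH difference and an equivalent voltage; the membrane potential and pH values are typical textbook-style figures for mitochondria and are used only as an illustration.

```python
# Illustrative proton-motive force, Delta_p = Delta_psi - (2.303*R*T/F) * Delta_pH,
# with both differences taken as P side minus N side. Values are assumed typical figures.

R = 8.314     # J/(mol*K)
F = 96_485.0  # C/mol
T = 310.0     # K (about 37 degrees C)

delta_psi_mV = 150.0    # electric potential, P side minus N side (assumed)
pH_P, pH_N = 7.0, 7.5   # intermembrane space vs matrix pH (assumed)
delta_pH = pH_P - pH_N  # the P side is more acidic, so this is negative

mV_per_pH_unit = 2.303 * R * T / F * 1000.0  # ~61.5 mV per pH unit at 37 degrees C
proton_motive_force_mV = delta_psi_mV - mV_per_pH_unit * delta_pH

print(f"2.303*R*T/F = {mV_per_pH_unit:.1f} mV per pH unit")
print(f"proton-motive force ~ {proton_motive_force_mV:.0f} mV")
```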
ATP synthase releases this stored energy by completing the circuit and allowing protons to flow down the electrochemical gradient, back to the N-side of the membrane. The electrochemical gradient drives the rotation of part of the enzyme's structure and couples this motion to the synthesis of ATP.
The two components of the proton-motive force are thermodynamically equivalent: In mitochondria, the largest part of energy is provided by the potential; in alkaliphile bacteria the electrical energy even has to compensate for a counteracting inverse pH difference. Inversely, chloroplasts operate mainly on ΔpH. However, they also require a small membrane potential for the kinetics of ATP synthesis. In the case of the fusobacterium Propionigenium modestum it drives the counter-rotation of subunits a and c of the FO motor of ATP synthase.
The amount of energy released by oxidative phosphorylation is high, compared with the amount produced by anaerobic fermentation. Glycolysis produces only 2 ATP molecules, but somewhere between 30 and 36 ATPs are produced by the oxidative phosphorylation of the 10 NADH and 2 succinate molecules made by converting one molecule of glucose to carbon dioxide and water, while each cycle of beta oxidation of a fatty acid yields about 14 ATPs. These ATP yields are theoretical maximum values; in practice, some protons leak across the membrane, lowering the yield of ATP.
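The bookkeeping behind such yield figures can be sketched as below. The ATP-per-NADH and ATP-per-succinate factors are commonly quoted approximations rather than values taken from this text, and, as noted above, proton leak pushes the real yield lower.

```python
# Illustrative ATP bookkeeping per molecule of glucose, for two commonly used sets of
# approximate yields per NADH and per succinate (FADH2). These are rough estimates.

nadh_per_glucose = 10
succinate_per_glucose = 2

for label, atp_per_nadh, atp_per_succinate in (
    ("older whole-number estimate", 3.0, 2.0),
    ("newer fractional estimate", 2.5, 1.5),
):
    oxidative_atp = (nadh_per_glucose * atp_per_nadh
                     + succinate_per_glucose * atp_per_succinate)
    print(f"{label}: ~{oxidative_atp:.0f} ATP from oxidative phosphorylation")
# The older estimate (~34) falls inside the 30-36 range quoted above; proton leak and
# transport costs push measured yields toward the lower end.
```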
Electron and proton transfer molecules
The electron transport chain carries both protons and electrons, passing electrons from donors to acceptors, and transporting protons across a membrane. These processes use both soluble and protein-bound transfer molecules. In the mitochondria, electrons are transferred within the intermembrane space by the water-soluble electron transfer protein cytochrome c. This carries only electrons, and these are transferred by the reduction and oxidation of an iron atom that the protein holds within a heme group in its structure. Cytochrome c is also found in some bacteria, where it is located within the periplasmic space.
Within the inner mitochondrial membrane, the lipid-soluble electron carrier coenzyme Q10 (Q) carries both electrons and protons by a redox cycle. This small benzoquinone molecule is very hydrophobic, so it diffuses freely within the membrane. When Q accepts two electrons and two protons, it becomes reduced to the ubiquinol form (QH2); when QH2 releases two electrons and two protons, it becomes oxidized back to the ubiquinone (Q) form. As a result, if two enzymes are arranged so that Q is reduced on one side of the membrane and QH2 oxidized on the other, ubiquinone will couple these reactions and shuttle protons across the membrane. Some bacterial electron transport chains use different quinones, such as menaquinone, in addition to ubiquinone.
Within proteins, electrons are transferred between flavin cofactors, iron–sulfur clusters and cytochromes. There are several types of iron–sulfur cluster. The simplest kind found in the electron transfer chain consists of two iron atoms joined by two atoms of inorganic sulfur; these are called [2Fe–2S] clusters. The second kind, called [4Fe–4S], contains a cube of four iron atoms and four sulfur atoms. Each iron atom in these clusters is coordinated by an additional amino acid, usually by the sulfur atom of cysteine. Metal ion cofactors undergo redox reactions without binding or releasing protons, so in the electron transport chain they serve solely to transport electrons through proteins. Electrons move quite long distances through proteins by hopping along chains of these cofactors. This occurs by quantum tunnelling, which is rapid over distances of less than 1.4 nm.
Eukaryotic electron transport chains
Many catabolic biochemical processes, such as glycolysis, the citric acid cycle, and beta oxidation, produce the reduced coenzyme NADH. This coenzyme contains electrons that have a high transfer potential; in other words, they will release a large amount of energy upon oxidation. However, the cell does not release this energy all at once, as this would be an uncontrollable reaction. Instead, the electrons are removed from NADH and passed to oxygen through a series of enzymes that each release a small amount of the energy. This set of enzymes, consisting of complexes I through IV, is called the electron transport chain and is found in the inner membrane of the mitochondrion. Succinate is also oxidized by the electron transport chain, but feeds into the pathway at a different point.
In eukaryotes, the enzymes in this electron transport system use the energy released by the oxidation of NADH with O2 to pump protons across the inner membrane of the mitochondrion. This causes protons to build up in the intermembrane space, and generates an electrochemical gradient across the membrane. The energy stored in this potential is then used by ATP synthase to produce ATP. Oxidative phosphorylation in the eukaryotic mitochondrion is the best-understood example of this process. The mitochondrion is present in almost all eukaryotes, with the exception of anaerobic protozoa such as Trichomonas vaginalis that instead reduce protons to hydrogen in a remnant mitochondrion called a hydrogenosome.
NADH-coenzyme Q oxidoreductase (complex I)
NADH-coenzyme Q oxidoreductase, also known as NADH dehydrogenase or complex I, is the first protein in the electron transport chain. Complex I is a giant enzyme with the mammalian complex I having 46 subunits and a molecular mass of about 1,000 kilodaltons (kDa). The structure is known in detail only from a bacterium; in most organisms the complex resembles a boot with a large "ball" poking out from the membrane into the mitochondrion. The genes that encode the individual proteins are contained in both the cell nucleus and the mitochondrial genome, as is the case for many enzymes present in the mitochondrion.
The reaction that is catalyzed by this enzyme is the two electron oxidation of NADH by coenzyme Q10 or ubiquinone (represented as Q in the equation below), a lipid-soluble quinone that is found in the mitochondrion membrane:
The start of the reaction, and indeed of the entire electron chain, is the binding of a NADH molecule to complex I and the donation of two electrons. The electrons enter complex I via a prosthetic group attached to the complex, flavin mononucleotide (FMN). The addition of electrons to FMN converts it to its reduced form, FMNH2. The electrons are then transferred through a series of iron–sulfur clusters: the second kind of prosthetic group present in the complex. There are both [2Fe–2S] and [4Fe–4S] iron–sulfur clusters in complex I.
As the electrons pass through this complex, four protons are pumped from the matrix into the intermembrane space. Exactly how this occurs is unclear, but it seems to involve conformational changes in complex I that cause the protein to bind protons on the N-side of the membrane and release them on the P-side of the membrane. Finally, the electrons are transferred from the chain of iron–sulfur clusters to a ubiquinone molecule in the membrane. Reduction of ubiquinone also contributes to the generation of a proton gradient, as two protons are taken up from the matrix as it is reduced to ubiquinol (QH2).
Succinate-Q oxidoreductase (complex II)
Succinate-Q oxidoreductase, also known as complex II or succinate dehydrogenase, is a second entry point to the electron transport chain. It is unusual because it is the only enzyme that is part of both the citric acid cycle and the electron transport chain. Complex II consists of four protein subunits and contains a bound flavin adenine dinucleotide (FAD) cofactor, iron–sulfur clusters, and a heme group that does not participate in electron transfer to coenzyme Q, but is believed to be important in decreasing production of reactive oxygen species. It oxidizes succinate to fumarate and reduces ubiquinone. As this reaction releases less energy than the oxidation of NADH, complex II does not transport protons across the membrane and does not contribute to the proton gradient.
In some eukaryotes, such as the parasitic worm Ascaris suum, an enzyme similar to complex II, fumarate reductase (menaquinol:fumarate oxidoreductase, or QFR), operates in reverse to oxidize ubiquinol and reduce fumarate. This allows the worm to survive in the anaerobic environment of the large intestine, carrying out anaerobic oxidative phosphorylation with fumarate as the electron acceptor. Another unconventional function of complex II is seen in the malaria parasite Plasmodium falciparum. Here, the reversed action of complex II as an oxidase is important in regenerating ubiquinol, which the parasite uses in an unusual form of pyrimidine biosynthesis.
Electron transfer flavoprotein-Q oxidoreductase
Electron transfer flavoprotein-ubiquinone oxidoreductase (ETF-Q oxidoreductase), also known as electron transferring-flavoprotein dehydrogenase, is a third entry point to the electron transport chain. It is an enzyme that accepts electrons from electron-transferring flavoprotein in the mitochondrial matrix, and uses these electrons to reduce ubiquinone. This enzyme contains a flavin and a [4Fe–4S] cluster, but, unlike the other respiratory complexes, it attaches to the surface of the membrane and does not cross the lipid bilayer.
In mammals, this metabolic pathway is important in beta oxidation of fatty acids and catabolism of amino acids and choline, as it accepts electrons from multiple acetyl-CoA dehydrogenases. In plants, ETF-Q oxidoreductase is also important in the metabolic responses that allow survival in extended periods of darkness.
Q-cytochrome c oxidoreductase (complex III)
Q-cytochrome c oxidoreductase is also known as cytochrome c reductase, cytochrome bc1 complex, or simply complex III. In mammals, this enzyme is a dimer, with each subunit complex containing 11 protein subunits, a [2Fe–2S] iron–sulfur cluster and three cytochromes: one cytochrome c1 and two b cytochromes. A cytochrome is a kind of electron-transferring protein that contains at least one heme group. The iron atoms inside complex III's heme groups alternate between a reduced ferrous (+2) and an oxidized ferric (+3) state as the electrons are transferred through the protein.
The reaction catalyzed by complex III is the oxidation of one molecule of ubiquinol and the reduction of two molecules of cytochrome c, a heme protein loosely associated with the mitochondrion. Unlike coenzyme Q, which carries two electrons, cytochrome c carries only one electron.
As only one of the electrons can be transferred from the QH2 donor to a cytochrome c acceptor at a time, the reaction mechanism of complex III is more elaborate than those of the other respiratory complexes, and occurs in two steps called the Q cycle. In the first step, the enzyme binds three substrates, first, QH2, which is then oxidized, with one electron being passed to the second substrate, cytochrome c. The two protons released from QH2 pass into the intermembrane space. The third substrate is Q, which accepts the second electron from the QH2 and is reduced to Q.−, which is the ubisemiquinone free radical. The first two substrates are released, but this ubisemiquinone intermediate remains bound. In the second step, a second molecule of QH2 is bound and again passes its first electron to a cytochrome c acceptor. The second electron is passed to the bound ubisemiquinone, reducing it to QH2 as it gains two protons from the mitochondrial matrix. This QH2 is then released from the enzyme.
As coenzyme Q is reduced to ubiquinol on the inner side of the membrane and oxidized to ubiquinone on the other, a net transfer of protons across the membrane occurs, adding to the proton gradient. The rather complex two-step mechanism by which this occurs is important, as it increases the efficiency of proton transfer. If, instead of the Q cycle, one molecule of QH2 were used to directly reduce two molecules of cytochrome c, the efficiency would be halved, with only one proton transferred per cytochrome c reduced.
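The efficiency argument can be restated as a small proton-accounting sketch; the numbers simply encode the description above (four protons released on the P side per two cytochrome c reduced in a full Q cycle, versus one proton per cytochrome c in the hypothetical direct mechanism).

```python
# Illustrative proton accounting for complex III, restating the Q-cycle efficiency
# argument in numbers taken from the description above.

def protons_per_cytochrome_c(use_q_cycle: bool) -> float:
    if use_q_cycle:
        protons_released = 4     # two QH2 oxidized per full cycle, two H+ each
        cytochrome_c_reduced = 2
    else:
        # hypothetical direct mechanism: one QH2 reduces two cytochrome c
        protons_released = 2
        cytochrome_c_reduced = 2
    return protons_released / cytochrome_c_reduced

print("with the Q cycle   :", protons_per_cytochrome_c(True), "H+ per cytochrome c")
print("without the Q cycle:", protons_per_cytochrome_c(False), "H+ per cytochrome c")
```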
Cytochrome c oxidase (complex IV)
Cytochrome c oxidase, also known as complex IV, is the final protein complex in the electron transport chain. The mammalian enzyme has an extremely complicated structure and contains 13 subunits, two heme groups, as well as multiple metal ion cofactors – in all, three atoms of copper, one of magnesium and one of zinc.
This enzyme mediates the final reaction in the electron transport chain and transfers electrons to oxygen and hydrogen (protons), while pumping protons across the membrane. The final electron acceptor oxygen is reduced to water in this step. Both the direct pumping of protons and the consumption of matrix protons in the reduction of oxygen contribute to the proton gradient. The reaction catalyzed is the oxidation of cytochrome c and the reduction of oxygen:
Alternative reductases and oxidases
Many eukaryotic organisms have electron transport chains that differ from the much-studied mammalian enzymes described above. For example, plants have alternative NADH oxidases, which oxidize NADH in the cytosol rather than in the mitochondrial matrix, and pass these electrons to the ubiquinone pool. These enzymes do not transport protons, and, therefore, reduce ubiquinone without altering the electrochemical gradient across the inner membrane.
Another example of a divergent electron transport chain is the alternative oxidase, which is found in plants, as well as some fungi, protists, and possibly some animals. This enzyme transfers electrons directly from ubiquinol to oxygen.
The electron transport pathways produced by these alternative NADH and ubiquinone oxidases have lower ATP yields than the full pathway. The advantages produced by a shortened pathway are not entirely clear. However, the alternative oxidase is produced in response to stresses such as cold, reactive oxygen species, and infection by pathogens, as well as other factors that inhibit the full electron transport chain. Alternative pathways might, therefore, enhance an organism's resistance to injury, by reducing oxidative stress.
Organization of complexes
The original model for how the respiratory chain complexes are organized was that they diffuse freely and independently in the mitochondrial membrane. However, recent data suggest that the complexes might form higher-order structures called supercomplexes or "respirasomes". In this model, the various complexes exist as organized sets of interacting enzymes. These associations might allow channeling of substrates between the various enzyme complexes, increasing the rate and efficiency of electron transfer. Within such mammalian supercomplexes, some components would be present in higher amounts than others, with some data suggesting a ratio between complexes I/II/III/IV and the ATP synthase of approximately 1:1:3:7:4. However, the debate over this supercomplex hypothesis is not completely resolved, as some data do not appear to fit with this model.
Prokaryotic electron transport chains
In contrast to the general similarity in structure and function of the electron transport chains in eukaryotes, bacteria and archaea possess a large variety of electron-transfer enzymes. These use an equally wide set of chemicals as substrates. In common with eukaryotes, prokaryotic electron transport uses the energy released from the oxidation of a substrate to pump ions across a membrane and generate an electrochemical gradient. Among the bacteria, oxidative phosphorylation in Escherichia coli is understood in the most detail, while archaeal systems are at present poorly understood.
The main difference between eukaryotic and prokaryotic oxidative phosphorylation is that bacteria and archaea use many different substances to donate or accept electrons. This allows prokaryotes to grow under a wide variety of environmental conditions. In E. coli, for example, oxidative phosphorylation can be driven by a large number of pairs of reducing agents and oxidizing agents. The midpoint potential of a chemical measures how much energy is released when it is oxidized or reduced, with reducing agents having negative potentials and oxidizing agents positive potentials.
E. coli can grow with reducing agents such as formate, hydrogen, or lactate as electron donors, and nitrate, DMSO, or oxygen as acceptors. The larger the difference in midpoint potential between an oxidizing and reducing agent, the more energy is released when they react. Out of these compounds, the succinate/fumarate pair is unusual, as its midpoint potential is close to zero. Succinate can therefore be oxidized to fumarate if a strong oxidizing agent such as oxygen is available, or fumarate can be reduced to succinate using a strong reducing agent such as formate. These alternative reactions are catalyzed by succinate dehydrogenase and fumarate reductase, respectively.
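The amount of energy available from a given donor/acceptor pair can be estimated from the standard relationship between free energy and potential difference; the expression below is the general thermodynamic relation, and the numbers that follow are rounded, illustrative values:

$$\Delta G^{\circ\prime} = -nF\,\Delta E^{\circ\prime}$$

Here n is the number of electrons transferred and F is the Faraday constant (about 96.5 kJ V−1 mol−1). Pairing a donor with O2/H2O (midpoint potential roughly +0.8 V at pH 7) therefore gives a strongly negative ΔG, whereas for the succinate/fumarate couple paired with a partner of similar potential, ΔE ≈ 0 and ΔG ≈ 0, which is why that reaction can be driven in either direction.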
Some prokaryotes use redox pairs that have only a small difference in midpoint potential. For example, nitrifying bacteria such as Nitrobacter oxidize nitrite to nitrate, donating the electrons to oxygen. The small amount of energy released in this reaction is enough to pump protons and generate ATP, but not enough to produce NADH or NADPH directly for use in anabolism. This problem is solved by using a nitrite oxidoreductase to produce enough proton-motive force to run part of the electron transport chain in reverse, causing complex I to generate NADH.
Prokaryotes control their use of these electron donors and acceptors by varying which enzymes are produced, in response to environmental conditions. This flexibility is possible because different oxidases and reductases use the same ubiquinone pool. This allows many combinations of enzymes to function together, linked by the common ubiquinol intermediate. These respiratory chains therefore have a modular design, with easily interchangeable sets of enzyme systems.
In addition to this metabolic diversity, prokaryotes also possess a range of isozymes – different enzymes that catalyze the same reaction. For example, in E. coli, there are two different types of ubiquinol oxidase using oxygen as an electron acceptor. Under highly aerobic conditions, the cell uses an oxidase with a low affinity for oxygen that can transport two protons per electron. However, if levels of oxygen fall, the cell switches to an oxidase that transfers only one proton per electron, but has a high affinity for oxygen.
ATP synthase (complex V)
ATP synthase, also called complex V, is the final enzyme in the oxidative phosphorylation pathway. This enzyme is found in all forms of life and functions in the same way in both prokaryotes and eukaryotes. The enzyme uses the energy stored in a proton gradient across a membrane to drive the synthesis of ATP from ADP and phosphate (Pi). Estimates of the number of protons required to synthesize one ATP have ranged from three to four, with some suggesting cells can vary this ratio, to suit different conditions.
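One way such estimates are rationalized is from the rotary mechanism described below: each full turn of the membrane-embedded ring translocates one proton per c subunit and yields three ATP, and roughly one further proton per ATP is usually added to account for the import of phosphate and ADP into the matrix. A rough illustrative calculation, assuming a ring of 8 c subunits (a value reported for the mammalian enzyme), is:

$$\frac{\mathrm{H^+}}{\mathrm{ATP}} \approx \frac{8}{3} + 1 \approx 3.7$$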
This phosphorylation reaction is an equilibrium, which can be shifted by altering the proton-motive force. In the absence of a proton-motive force, the ATP synthase reaction will run in reverse, hydrolyzing ATP and pumping protons out of the matrix across the membrane. However, when the proton-motive force is high, the reaction is forced to run in the forward direction, allowing protons to flow down their concentration gradient and turning ADP into ATP. Indeed, in the closely related vacuolar-type H+-ATPases, the hydrolysis reaction is used to acidify cellular compartments, by pumping protons and hydrolyzing ATP.
ATP synthase is a massive protein complex with a mushroom-like shape. The mammalian enzyme complex contains 16 subunits and has a mass of approximately 600 kilodaltons. The portion embedded within the membrane is called FO and contains a ring of c subunits and the proton channel. The stalk and the ball-shaped headpiece are called F1 and are the site of ATP synthesis. The ball-shaped complex at the end of the F1 portion contains six proteins of two different kinds (three α subunits and three β subunits), whereas the "stalk" consists of one protein: the γ subunit, with the tip of the stalk extending into the ball of α and β subunits. Both the α and β subunits bind nucleotides, but only the β subunits catalyze the ATP synthesis reaction. Reaching along the side of the F1 portion and back into the membrane is a long rod-like subunit that anchors the α and β subunits into the base of the enzyme.
As protons cross the membrane through the channel in the base of ATP synthase, the FO proton-driven motor rotates. Rotation might be caused by changes in the ionization of amino acids in the ring of c subunits causing electrostatic interactions that propel the ring of c subunits past the proton channel. This rotating ring in turn drives the rotation of the central axle (the γ subunit stalk) within the α and β subunits. The α and β subunits are prevented from rotating themselves by the side-arm, which acts as a stator. This movement of the tip of the γ subunit within the ball of α and β subunits provides the energy for the active sites in the β subunits to undergo a cycle of movements that produces and then releases ATP.
This ATP synthesis reaction is called the binding change mechanism and involves the active site of a β subunit cycling between three states. In the "open" state, ADP and phosphate enter the active site. The protein then closes up around the molecules and binds them loosely – the "loose" state. The enzyme then changes shape again and forces these molecules together, with the active site in the resulting "tight" state binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the open state, releasing ATP and binding more ADP and phosphate, ready for the next cycle.
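The cycle can be pictured as a simple state machine in which each 120° rotation of the central stalk advances every catalytic site by one state. The sketch below only illustrates that bookkeeping, with the state names and release rule taken from the description above; it is not a kinetic or structural model:

```python
# Illustrative bookkeeping for the binding change mechanism (not a kinetic model).
STATES = ["open", "loose", "tight"]

def rotate(sites):
    """Advance each catalytic site by one state for a 120-degree rotation of the stalk."""
    released = 0
    new_sites = []
    for state in sites:
        nxt = STATES[(STATES.index(state) + 1) % 3]
        # ATP formed in the "tight" state is released when that site returns to "open".
        if state == "tight":
            released += 1
        new_sites.append(nxt)
    return new_sites, released

sites = ["open", "loose", "tight"]  # the three beta subunits are out of phase
total_atp = 0
for step in range(6):  # two full 360-degree rotations = six 120-degree steps
    sites, released = rotate(sites)
    total_atp += released
    print(f"step {step + 1}: {sites}, ATP released so far: {total_atp}")
```

Each 120° step releases one ATP, so a full rotation produces three, one from each β subunit.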
In some bacteria and archaea, ATP synthesis is driven by the movement of sodium ions through the cell membrane, rather than the movement of protons. Archaea such as Methanococcus also contain the A1Ao synthase, a form of the enzyme that contains additional proteins with little similarity in sequence to other bacterial and eukaryotic ATP synthase subunits. It is possible that, in some species, the A1Ao form of the enzyme is a specialized sodium-driven ATP synthase, but this might not be true in all cases.
Oxidative phosphorylation - energetics
The transport of electrons from the redox pair NAD+/NADH to the final redox pair 1/2 O2/H2O can be summarized as
1/2 O2 + NADH + H+ → H2O + NAD+
The potential difference between these two redox pairs is 1.14 volts, which corresponds to a free-energy change of about −52 kcal/mol of NADH oxidized (roughly −2600 kJ per 6 mol of O2).
When one NADH is oxidized through the electron transport chain, three ATP are produced, which is equivalent to 7.3 kcal/mol × 3 = 21.9 kcal/mol.
The efficiency of energy conservation can be calculated using the following formula:
Efficiency = (21.9 / 52) × 100% ≈ 42%
We can therefore conclude that when NADH is oxidized, about 42% of the available energy is conserved in the form of three ATP, while the remaining 58% is lost as heat (unless the chemical energy of ATP under physiological conditions has been underestimated, in which case the true efficiency is higher).
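These figures can be reproduced from the standard free-energy relation; the worked calculation below uses rounded standard values (two electrons per NADH and F ≈ 23.06 kcal V−1 mol−1) and is only illustrative:

$$\Delta G^{\circ\prime} = -nF\,\Delta E^{\circ\prime} = -(2)(23.06\ \mathrm{kcal\,V^{-1}\,mol^{-1}})(1.14\ \mathrm{V}) \approx -52.6\ \mathrm{kcal\,mol^{-1}}$$

$$\mathrm{Efficiency} \approx \frac{3 \times 7.3\ \mathrm{kcal\,mol^{-1}}}{52.6\ \mathrm{kcal\,mol^{-1}}} \approx 0.42$$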
Reactive oxygen species
Molecular oxygen is a good terminal electron acceptor because it is a strong oxidizing agent. The reduction of oxygen does involve potentially harmful intermediates. Although the transfer of four electrons and four protons reduces oxygen to water, which is harmless, transfer of one or two electrons produces superoxide or peroxide anions, which are dangerously reactive.
These reactive oxygen species and their reaction products, such as the hydroxyl radical, are very harmful to cells, as they oxidize proteins and cause mutations in DNA. This cellular damage may contribute to disease and is proposed as one cause of aging.
The cytochrome c oxidase complex is highly efficient at reducing oxygen to water, and it releases very few partly reduced intermediates; however small amounts of superoxide anion and peroxide are produced by the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, as a highly reactive ubisemiquinone free radical is formed as an intermediate in the Q cycle. This unstable species can lead to electron "leakage" when electrons transfer directly to oxygen, forming superoxide. As the production of reactive oxygen species by these proton-pumping complexes is greatest at high membrane potentials, it has been proposed that mitochondria regulate their activity to maintain the membrane potential within a narrow range that balances ATP production against oxidant generation. For instance, oxidants can activate uncoupling proteins that reduce membrane potential.
To counteract these reactive oxygen species, cells contain numerous antioxidant systems, including antioxidant vitamins such as vitamin C and vitamin E, and antioxidant enzymes such as superoxide dismutase, catalase, and peroxidases, which detoxify the reactive species, limiting damage to the cell.
Oxidative phosphorylation in hypoxic/anoxic conditions
As oxygen is fundamental for oxidative phosphorylation, a shortage of O2 can alter ATP production rates. Under anoxic conditions, ATP synthase runs in reverse, pumping protons from the matrix back into the intermembrane space and consuming ATP in the process. The proton-motive force and ATP production can nevertheless be maintained by intracellular acidosis: cytosolic protons that accumulate through ATP hydrolysis and lactic acidosis can diffuse freely across the mitochondrial outer membrane and acidify the intermembrane space, thereby contributing directly to the proton-motive force and ATP production.
Inhibitors
There are several well-known drugs and toxins that inhibit oxidative phosphorylation. Although any one of these toxins inhibits only one enzyme in the electron transport chain, inhibition of any step in this process will halt the rest of the process. For example, if oligomycin inhibits ATP synthase, protons cannot pass back into the mitochondrion. As a result, the proton pumps are unable to operate, as the gradient becomes too strong for them to overcome. NADH is then no longer oxidized and the citric acid cycle ceases to operate because the concentration of NAD+ falls below the concentration that these enzymes can use.
Many site-specific inhibitors of the electron transport chain have contributed to the present knowledge of mitochondrial respiration. Synthesis of ATP is also dependent on the electron transport chain, so all site-specific inhibitors also inhibit ATP formation. The fish poison rotenone, the barbiturate drug amytal, and the antibiotic piericidin A inhibit the transfer of electrons from NADH to coenzyme Q, thereby blocking complex I.
Carbon monoxide, cyanide, hydrogen sulfide and azide effectively inhibit cytochrome c oxidase. Carbon monoxide reacts with the reduced form of the cytochrome, while cyanide and azide react with the oxidized form. The antibiotic antimycin A and British anti-Lewisite, an antidote used against chemical weapons, are two important inhibitors of the site between cytochromes b and c1.
Not all inhibitors of oxidative phosphorylation are toxins. In brown adipose tissue, regulated proton channels called uncoupling proteins can uncouple respiration from ATP synthesis. This rapid respiration produces heat, and is particularly important as a way of maintaining body temperature for hibernating animals, although these proteins may also have a more general function in cells' responses to stress.
History
The field of oxidative phosphorylation began with the report in 1906 by Arthur Harden of a vital role for phosphate in cellular fermentation, but initially only sugar phosphates were known to be involved. However, in the early 1940s, the link between the oxidation of sugars and the generation of ATP was firmly established by Herman Kalckar, confirming the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that the coenzyme NADH linked metabolic pathways such as the citric acid cycle and the synthesis of ATP. The term oxidative phosphorylation itself was coined in 1939.
For another twenty years, the mechanism by which ATP is generated remained mysterious, with scientists searching for an elusive "high-energy intermediate" that would link oxidation and phosphorylation reactions. This puzzle was solved by Peter D. Mitchell with the publication of the chemiosmotic theory in 1961. At first, this proposal was highly controversial, but it was slowly accepted and Mitchell was awarded a Nobel prize in 1978. Subsequent research concentrated on purifying and characterizing the enzymes involved, with major contributions being made by David E. Green on the complexes of the electron-transport chain, as well as Efraim Racker on the ATP synthase. A critical step towards solving the mechanism of the ATP synthase was provided by Paul D. Boyer, by his development in 1973 of the "binding change" mechanism, followed by his radical proposal of rotational catalysis in 1982. More recent work has included structural studies on the enzymes involved in oxidative phosphorylation by John E. Walker, with Walker and Boyer being awarded a Nobel Prize in 1997.
See also
Respirometry
TIM/TOM Complex
Notes
References
Further reading
Introductory
Advanced
General resources
Animated diagrams illustrating oxidative phosphorylation Wiley and Co Concepts in Biochemistry
On-line biophysics lectures Antony Crofts, University of Illinois at Urbana–Champaign
ATP Synthase Graham Johnson
Structural resources
PDB molecule of the month:
ATP synthase
Cytochrome c
Cytochrome c oxidase
Interactive molecular models at Universidade Fernando Pessoa:
NADH dehydrogenase
succinate dehydrogenase
Coenzyme Q - cytochrome c reductase
cytochrome c oxidase
Cellular respiration
Integral membrane proteins
Metabolism
Redox | 0.790302 | 0.996456 | 0.787501 |
Physiology | Physiology is the scientific study of functions and mechanisms in a living system. As a subdiscipline of biology, physiology focuses on how organisms, organ systems, individual organs, cells, and biomolecules carry out chemical and physical functions in a living system. According to the classes of organisms, the field can be divided into medical physiology, animal physiology, plant physiology, cell physiology, and comparative physiology.
Central to physiological functioning are biophysical and biochemical processes, homeostatic control mechanisms, and communication between cells. Physiological state is the condition of normal function. In contrast, pathological state refers to abnormal conditions, including human diseases.
The Nobel Prize in Physiology or Medicine is awarded by the Nobel Assembly at the Karolinska Institute for exceptional scientific achievements in physiology related to the field of medicine.
Foundations
Because physiology focuses on the functions and mechanisms of living organisms at all levels, from the molecular and cellular level to the level of whole organisms and populations, its foundations span a range of key disciplines:
Anatomy is the study of the structure and organization of living organisms, from the microscopic level of cells and tissues to the macroscopic level of organs and systems. Anatomical knowledge is important in physiology because the structure and function of an organism are often dictated by one another.
Biochemistry is the study of the chemical processes and substances that occur within living organisms. Knowledge of biochemistry provides the foundation for understanding cellular and molecular processes that are essential to the functioning of organisms.
Biophysics is the study of the physical properties of living organisms and their interactions with their environment. It helps to explain how organisms sense and respond to different stimuli, such as light, sound, and temperature, and how they maintain homeostasis, or a stable internal environment.
Genetics is the study of heredity and the variation of traits within and between populations. It provides insights into the genetic basis of physiological processes and the ways in which genes interact with the environment to influence an organism's phenotype.
Evolutionary biology is the study of the processes that have led to the diversity of life on Earth. It helps to explain the origin and adaptive significance of physiological processes and the ways in which organisms have evolved to cope with their environment.
Subdisciplines
There are many ways to categorize the subdisciplines of physiology:
based on the taxa studied: human physiology, animal physiology, plant physiology, microbial physiology, viral physiology
based on the level of organization: cell physiology, molecular physiology, systems physiology, organismal physiology, ecological physiology, integrative physiology
based on the process that causes physiological variation: developmental physiology, environmental physiology, evolutionary physiology
based on the ultimate goals of the research: applied physiology (e.g., medical physiology), non-applied (e.g., comparative physiology)
Subdisciplines by level of organisation
Cell physiology
Although there are differences between animal, plant, and microbial cells, the basic physiological functions of cells can be divided into the processes of cell division, cell signaling, cell growth, and cell metabolism.
Subdisciplines by taxa
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning of plants. Closely related fields include plant morphology, plant ecology, phytochemistry, cell biology, genetics, biophysics, and molecular biology. Fundamental processes of plant physiology include photosynthesis, respiration, plant nutrition, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, seed germination, dormancy, and stomata function and transpiration. Absorption of water by roots, production of food in the leaves, and growth of shoots towards light are examples of plant physiology.
Animal physiology
Human physiology
Human physiology is the study of how the human body's systems and functions work together to maintain a stable internal environment. It includes the study of the nervous, endocrine, cardiovascular, respiratory, digestive, and urinary systems, as well as cellular and exercise physiology. Understanding human physiology is essential for diagnosing and treating health conditions and promoting overall wellbeing.
It seeks to understand the mechanisms that work to keep the human body alive and functioning, through scientific enquiry into the nature of mechanical, physical, and biochemical functions of humans, their organs, and the cells of which they are composed. The principal level of focus of physiology is at the level of organs and systems within systems. The endocrine and nervous systems play major roles in the reception and transmission of signals that integrate function in animals. Homeostasis is a major aspect with regard to such interactions within plants as well as animals. Integration, the biological basis of the study of physiology, refers to the overlap of the many functions of the systems of the human body, as well as of its accompanying form. It is achieved through communication that occurs in a variety of ways, both electrical and chemical.
Changes in physiology can impact the mental functions of individuals. Examples of this would be the effects of certain medications or toxic levels of substances. Change in behavior as a result of these substances is often used to assess the health of individuals.
Much of the foundation of knowledge in human physiology was provided by animal experimentation. Due to the frequent connection between form and function, physiology and anatomy are intrinsically linked and are studied in tandem as part of a medical curriculum.
Subdisciplines by research objective
Comparative physiology
Involving evolutionary physiology and environmental physiology, comparative physiology considers the diversity of functional characteristics across organisms.
History
The classical era
The study of human physiology as a medical field originates in classical Greece, at the time of Hippocrates (late 5th century BC). Outside of Western tradition, early forms of physiology or anatomy can be reconstructed as having been present at around the same time in China, India and elsewhere. Hippocrates incorporated the theory of humorism, which consisted of four basic substances: earth, water, air and fire. Each substance is known for having a corresponding humor: black bile, phlegm, blood, and yellow bile, respectively. Hippocrates also noted some emotional connections to the four humors, on which Galen would later expand. The critical thinking of Aristotle and his emphasis on the relationship between structure and function marked the beginning of physiology in Ancient Greece. Like Hippocrates, Aristotle took to the humoral theory of disease, which also consisted of four primary qualities in life: hot, cold, wet and dry. Galen (–200 AD) was the first to use experiments to probe the functions of the body. Unlike Hippocrates, Galen argued that humoral imbalances can be located in specific organs as well as in the body as a whole. His modification of this theory better equipped doctors to make more precise diagnoses. Galen also built on Hippocrates' idea that emotions were tied to the humors, and added the notion of temperaments: sanguine corresponds with blood; phlegmatic is tied to phlegm; choleric is connected to yellow bile; and black bile corresponds with melancholy. Galen also saw the human body as consisting of three connected systems: the brain and nerves, which are responsible for thoughts and sensations; the heart and arteries, which give life; and the liver and veins, which can be attributed to nutrition and growth. Galen was also the founder of experimental physiology. For the next 1,400 years, Galenic physiology was a powerful and influential tool in medicine.
Early modern period
Jean Fernel (1497–1558), a French physician, introduced the term "physiology". Galen, Ibn al-Nafis, Michael Servetus, Realdo Colombo, Amato Lusitano and William Harvey are credited with making important discoveries in the circulation of the blood. In the 1610s, Santorio Santorio was the first to use a device to measure the pulse rate (the pulsilogium) and a thermoscope to measure temperature.
In 1791 Luigi Galvani described the role of electricity in the nerves of dissected frogs. In 1811, César Julien Jean Legallois studied respiration in animal dissection and lesions and found the center of respiration in the medulla oblongata. In the same year, Charles Bell finished work on what would later become known as the Bell–Magendie law, which compared functional differences between dorsal and ventral roots of the spinal cord. In 1824, François Magendie described the sensory roots and produced the first evidence of the cerebellum's role in equilibration to complete the Bell–Magendie law.
In the 1820s, the French physiologist Henri Milne-Edwards introduced the notion of a physiological division of labor, which made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (which he called appareils).
In 1858, Joseph Lister studied the cause of blood coagulation and inflammation that resulted after previous injuries and surgical wounds. He later discovered and implemented antiseptics in the operating room, and as a result, decreased the death rate from surgery by a substantial amount.
The Physiological Society was founded in London in 1876 as a dining club. The American Physiological Society (APS) is a nonprofit organization that was founded in 1887. The Society is, "devoted to fostering education, scientific research, and dissemination of information in the physiological sciences."
In 1891, Ivan Pavlov performed research on "conditional responses" that involved dogs' saliva production in response to a bell and visual stimuli.
In the 19th century, physiological knowledge began to accumulate at a rapid rate, in particular with the 1838 appearance of the Cell theory of Matthias Schleiden and Theodor Schwann. It radically stated that organisms are made up of units called cells. Claude Bernard's (1813–1878) further discoveries ultimately led to his concept of milieu interieur (internal environment), which would later be taken up and championed as "homeostasis" by American physiologist Walter B. Cannon in 1929. By homeostasis, Cannon meant "the maintenance of steady states in the body and the physiological processes through which they are regulated." In other words, the body's ability to regulate its internal environment. William Beaumont was the first American to utilize the practical application of physiology.
Nineteenth-century physiologists such as Michael Foster, Max Verworn, and Alfred Binet, building on Haeckel's ideas, elaborated what came to be called "general physiology", a unified science of life based on the actions of cells, which was later renamed cell biology in the 20th century.
Late modern period
In the 20th century, biologists became interested in how organisms other than human beings function, eventually spawning the fields of comparative physiology and ecophysiology. Major figures in these fields include Knut Schmidt-Nielsen and George Bartholomew. Most recently, evolutionary physiology has become a distinct subdiscipline.
In 1920, August Krogh won the Nobel Prize for discovering how blood flow is regulated in capillaries.
In 1954, Andrew Huxley and Hugh Huxley, alongside their research team, discovered the sliding filaments in skeletal muscle, known today as the sliding filament theory.
Recently, there have been intense debates about the vitality of physiology as a discipline (Is it dead or alive?). If physiology is perhaps less visible nowadays than during the golden age of the 19th century, it is in large part because the field has given birth to some of the most active domains of today's biological sciences, such as neuroscience, endocrinology, and immunology. Furthermore, physiology is still often seen as an integrative discipline, which can put together into a coherent framework data coming from various different domains.
Notable physiologists
Women in physiology
Initially, women were largely excluded from official involvement in any physiological society. The American Physiological Society, for example, was founded in 1887 and included only men in its ranks. In 1902, the American Physiological Society elected Ida Hyde as the first female member of the society. Hyde, a representative of the American Association of University Women and a global advocate for gender equality in education, attempted to promote gender equality in every aspect of science and medicine.
Soon thereafter, in 1913, J.S. Haldane proposed that women be allowed to formally join The Physiological Society, which had been founded in 1876. On 3 July 1915, six women were officially admitted: Florence Buchanan, Winifred Cullis, Ruth Skelton, Sarah C. M. Sowton, Constance Leetham Terry, and Enid M. Tribe. The centenary of the election of women was celebrated in 2015 with the publication of the book "Women Physiologists: Centenary Celebrations And Beyond For The Physiological Society."
Prominent women physiologists include:
Bodil Schmidt-Nielsen, the first woman president of the American Physiological Society in 1975.
Gerty Cori, along with her husband Carl Cori, received the Nobel Prize in Physiology or Medicine in 1947 for their discovery of the course of the catalytic conversion of glycogen, including the identification of the phosphate-containing form of glucose (glucose-1-phosphate, the Cori ester) and its function within metabolic mechanisms for energy production. Moreover, they discovered the Cori cycle, also known as the lactic acid cycle, which describes how muscle tissue converts glycogen into lactic acid via lactic acid fermentation.
Barbara McClintock was awarded the 1983 Nobel Prize in Physiology or Medicine for the discovery of genetic transposition. McClintock is the only woman to have received an unshared Nobel Prize in Physiology or Medicine.
Gertrude Elion, along with George Hitchings and Sir James Black, received the Nobel Prize for Physiology or Medicine in 1988 for their development of drugs employed in the treatment of several major diseases, such as leukemia, some autoimmune disorders, gout, malaria, and viral herpes.
Linda B. Buck, along with Richard Axel, received the Nobel Prize in Physiology or Medicine in 2004 for their discovery of odorant receptors and the complex organization of the olfactory system.
Françoise Barré-Sinoussi, along with Luc Montagnier, received the Nobel Prize in Physiology or Medicine in 2008 for their work on the identification of the Human Immunodeficiency Virus (HIV), the cause of Acquired Immunodeficiency Syndrome (AIDS).
Elizabeth Blackburn, along with Carol W. Greider and Jack W. Szostak, was awarded the 2009 Nobel Prize for Physiology or Medicine for the discovery of the genetic composition and function of telomeres and the enzyme called telomerase.
See also
Outline of physiology
Biochemistry
Biophysics
Cytoarchitecture
Defense physiology
Ecophysiology
Exercise physiology
Fish physiology
Insect physiology
Human body
Molecular biology
Metabolome
Neurophysiology
Pathophysiology
Pharmacology
Physiome
American Physiological Society
International Union of Physiological Sciences
The Physiological Society
Brazilian Society of Physiology
References
Bibliography
Human physiology
Widmaier, E.P., Raff, H., Strang, K.T. Vander's Human Physiology. 11th Edition, McGraw-Hill, 2009.
Marieb, E.N. Essentials of Human Anatomy and Physiology. 10th Edition, Benjamin Cummings, 2012.
Animal physiology
Hill, R.W., Wyse, G.A., Anderson, M. Animal Physiology, 3rd ed. Sinauer Associates, Sunderland, 2012.
Moyes, C.D., Schulte, P.M. Principles of Animal Physiology, second edition. Pearson/Benjamin Cummings. Boston, MA, 2008.
Randall, D., Burggren, W., and French, K. Eckert Animal Physiology: Mechanism and Adaptation, 5th Edition. W.H. Freeman and Company, 2002.
Schmidt-Nielsen, K. Animal Physiology: Adaptation and Environment. Cambridge & New York: Cambridge University Press, 1997.
Withers, P.C. Comparative animal physiology. Saunders College Publishing, New York, 1992.
Plant physiology
Larcher, W. Physiological plant ecology (4th ed.). Springer, 2001.
Salisbury, F.B, Ross, C.W. Plant physiology. Brooks/Cole Pub Co., 1992
Taiz, L., Zieger, E. Plant Physiology (5th ed.), Sunderland, Massachusetts: Sinauer, 2010.
Fungal physiology
Griffin, D.H. Fungal Physiology, Second Edition. Wiley-Liss, New York, 1994.
Protistan physiology
Levandowsky, M. Physiological Adaptations of Protists. In: Cell physiology sourcebook: essentials of membrane biophysics. Amsterdam; Boston: Elsevier/AP, 2012.
Levandowski, M., Hutner, S.H. (eds). Biochemistry and physiology of protozoa. Volumes 1, 2, and 3. Academic Press: New York, NY, 1979; 2nd ed.
Laybourn-Parry J. A Functional Biology of Free-Living Protozoa. Berkeley, California: University of California Press; 1984.
Algal physiology
Lobban, C.S., Harrison, P.J. Seaweed ecology and physiology. Cambridge University Press, 1997.
Stewart, W. D. P. (ed.). Algal Physiology and Biochemistry. Blackwell Scientific Publications, Oxford, 1974.
Bacterial physiology
El-Sharoud, W. (ed.). Bacterial Physiology: A Molecular Approach. Springer-Verlag, Berlin-Heidelberg, 2008.
Kim, B.H., Gadd, M.G. Bacterial Physiology and Metabolism. Cambridge, 2008.
Moat, A.G., Foster, J.W., Spector, M.P. Microbial Physiology, 4th ed. Wiley-Liss, Inc. New York, NY, 2002.
External links
physiologyINFO.org – public information site sponsored by the American Physiological Society
Branches of biology | 0.788459 | 0.99859 | 0.787347 |
Denaturation (biochemistry) | In biochemistry, denaturation is a process in which proteins or nucleic acids lose the folded structure present in their native state due to various factors, including the application of some external stress or compound, such as a strong acid or base, a concentrated inorganic salt, an organic solvent (e.g., alcohol or chloroform), agitation and radiation, or heat. If proteins in a living cell are denatured, this results in disruption of cell activity and possibly cell death. Protein denaturation is also a consequence of cell death. Denatured proteins can exhibit a wide range of characteristics, from conformational change and loss of solubility or dissociation of cofactors to aggregation due to the exposure of hydrophobic groups. The loss of solubility as a result of denaturation is called coagulation. Denatured proteins lose their 3D structure and therefore cannot function.
Proper protein folding is key to whether a globular or membrane protein can do its job correctly; it must be folded into the native shape to function. However, hydrogen bonds and cofactor-protein binding, which play a crucial role in folding, are rather weak, and thus, easily affected by heat, acidity, varying salt concentrations, chelating agents, and other stressors which can denature the protein. This is one reason why cellular homeostasis is physiologically necessary in most life forms.
Common examples
When food is cooked, some of its proteins become denatured. This is why boiled eggs become hard and cooked meat becomes firm.
A classic example of denaturing in proteins comes from egg whites, which are typically largely egg albumins in water. Fresh from the eggs, egg whites are transparent and liquid. Cooking the thermally unstable whites turns them opaque, forming an interconnected solid mass. The same transformation can be effected with a denaturing chemical. Pouring egg whites into a beaker of acetone will also turn egg whites translucent and solid. The skin that forms on curdled milk is another common example of denatured protein. The cold appetizer known as ceviche is prepared by chemically "cooking" raw fish and shellfish in an acidic citrus marinade, without heat.
Protein denaturation
Denatured proteins can exhibit a wide range of characteristics, from loss of solubility to protein aggregation.
Background
Proteins or polypeptides are polymers of amino acids. A protein is created by ribosomes that "read" RNA that is encoded by codons in the gene and assemble the requisite amino acid combination from the genetic instruction, in a process known as translation. The newly created protein strand then undergoes posttranslational modification, in which additional atoms or molecules are added, for example copper, zinc, or iron. Once this post-translational modification process has been completed, the protein begins to fold (sometimes spontaneously and sometimes with enzymatic assistance), curling up on itself so that hydrophobic elements of the protein are buried deep inside the structure and hydrophilic elements end up on the outside. The final shape of a protein determines how it interacts with its environment.
Protein folding relies on a balance between a large number of weak intra-molecular interactions within a protein (hydrophobic, electrostatic, and van der Waals interactions) and protein-solvent interactions. As a result, this process is heavily dependent on the environmental state in which the protein resides. These environmental conditions include, but are not limited to, temperature, salinity, pressure, and the solvents involved. Consequently, any exposure to extreme stresses (e.g. heat or radiation, high inorganic salt concentrations, strong acids and bases) can disrupt a protein's interactions and lead to denaturation.
When a protein is denatured, secondary and tertiary structures are altered but the peptide bonds of the primary structure between the amino acids are left intact. Since all structural levels of the protein determine its function, the protein can no longer perform its function once it has been denatured. This is in contrast to intrinsically unstructured proteins, which are unfolded in their native state, but still functionally active and tend to fold upon binding to their biological target.
How denaturation occurs at levels of protein structure
In quaternary structure denaturation, protein sub-units are dissociated and/or the spatial arrangement of protein subunits is disrupted.
Tertiary structure denaturation involves the disruption of:
Covalent interactions between amino acid side-chains (such as disulfide bridges between cysteine groups)
Non-covalent dipole-dipole interactions between polar amino acid side-chains (and the surrounding solvent)
Van der Waals (induced dipole) interactions between nonpolar amino acid side-chains.
In secondary structure denaturation, proteins lose all regular repeating patterns such as alpha-helices and beta-pleated sheets, and adopt a random coil configuration.
Primary structure, such as the sequence of amino acids held together by covalent peptide bonds, is not disrupted by denaturation.
Loss of function
Most biological substrates lose their biological function when denatured. For example, enzymes lose their activity, because the substrates can no longer bind to the active site, and because amino acid residues involved in stabilizing substrates' transition states are no longer positioned to be able to do so. The denaturing process and the associated loss of activity can be measured using techniques such as dual-polarization interferometry, CD, QCM-D and MP-SPR.
Loss of activity due to heavy metals and metalloids
Heavy metals are known to disrupt the function and activity carried out by proteins. Heavy metals include the transition metals as well as a select number of metalloids. When interacting with native, folded proteins, these metals tend to obstruct their biological activity. This interference can occur in a number of different ways. Heavy metals can form a complex with the functional side-chain groups present in a protein or form bonds to free thiols. They also play a role in oxidizing amino acid side chains present in proteins. In addition, when interacting with metalloproteins, heavy metals can dislocate and replace key metal ions. As a result, heavy metals can interfere with folded proteins, which can strongly reduce protein stability and activity.
Reversibility and irreversibility
In many cases, denaturation is reversible (the proteins can regain their native state when the denaturing influence is removed). This process can be called renaturation. This understanding has led to the notion that all the information needed for proteins to assume their native state was encoded in the primary structure of the protein, and hence in the DNA that codes for the protein, the so-called "Anfinsen's thermodynamic hypothesis".
Denaturation can also be irreversible. This irreversibility is typically kinetic, not thermodynamic, as a folded protein generally has lower free energy than when it is unfolded. In kinetically irreversible denaturation, the protein becomes trapped in a local free-energy minimum, which can prevent it from ever refolding after it has been denatured.
Protein denaturation due to pH
Denaturation can also be caused by changes in pH, which affect the chemistry of the amino acid residues. Ionizable groups in amino acids become ionized or neutralized as the pH changes, and a shift to more acidic or more basic conditions can therefore induce unfolding. Acid-induced unfolding often occurs between pH 2 and 5, while base-induced unfolding usually requires pH 10 or higher.
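The protonation state of an individual ionizable group as a function of pH is described by the Henderson–Hasselbalch equation; this is only a first approximation for proteins, whose effective pKa values are shifted by the local environment:

$$\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A}^-]}{[\mathrm{HA}]}$$

When the pH moves well above or below a group's pKa, the group's charge changes, which can break the salt bridges and hydrogen bonds that stabilize the folded state.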
Nucleic acid denaturation
Nucleic acids (including RNA and DNA) are nucleotide polymers synthesized by polymerase enzymes during either transcription or DNA replication. Following 5'-3' synthesis of the backbone, individual nitrogenous bases are capable of interacting with one another via hydrogen bonding, thus allowing for the formation of higher-order structures. Nucleic acid denaturation occurs when hydrogen bonding between nucleotides is disrupted, and results in the separation of previously annealed strands. For example, denaturation of DNA due to high temperatures results in the disruption of base pairs and the separation of the double-stranded helix into two single strands. Nucleic acid strands are capable of re-annealing when "normal" conditions are restored, but if restoration occurs too quickly, the nucleic acid strands may re-anneal imperfectly, resulting in the improper pairing of bases.
Biologically-induced denaturation
The non-covalent interactions between antiparallel strands in DNA can be broken in order to "open" the double helix when biologically important mechanisms such as DNA replication, transcription, DNA repair or protein binding are set to occur. The area of partially separated DNA is known as the denaturation bubble, which can be more specifically defined as the opening of a DNA double helix through the coordinated separation of base pairs.
The first model that attempted to describe the thermodynamics of the denaturation bubble was introduced in 1966 and called the Poland-Scheraga Model. This model describes the denaturation of DNA strands as a function of temperature. As the temperature increases, the hydrogen bonds between the base pairs are increasingly disturbed and "denatured loops" begin to form. However, the Poland-Scheraga Model is now considered elementary because it fails to account for the confounding implications of DNA sequence, chemical composition, stiffness and torsion.
Recent thermodynamic studies have inferred that the lifetime of a single denaturation bubble ranges from 1 microsecond to 1 millisecond. This information is based on established timescales of DNA replication and transcription. Currently, biophysical and biochemical research studies are being performed to more fully elucidate the thermodynamic details of the denaturation bubble.
Denaturation due to chemical agents
With polymerase chain reaction (PCR) being among the most popular contexts in which DNA denaturation is desired, heating is the most frequent method of denaturation. Other than denaturation by heat, nucleic acids can undergo the denaturation process through various chemical agents such as formamide, guanidine, sodium salicylate, dimethyl sulfoxide (DMSO), propylene glycol, and urea. These chemical denaturing agents lower the melting temperature (Tm) by competing for hydrogen bond donors and acceptors with pre-existing nitrogenous base pairs. Some agents are even able to induce denaturation at room temperature. For example, alkaline agents (e.g. NaOH) have been shown to denature DNA by changing pH and removing hydrogen-bond contributing protons. These denaturants are employed in denaturing gradient gel electrophoresis (DGGE) gels, which promote denaturation of nucleic acids in order to eliminate the influence of nucleic acid shape on their electrophoretic mobility.
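As a rough illustration of the melting temperature that such agents lower, the Wallace rule assigns about 2 °C to each A–T pair and 4 °C to each G–C pair of a short oligonucleotide. The sketch below implements only that rule of thumb; it ignores strand length beyond short oligos, salt concentration and denaturant corrections:

```python
def wallace_tm(sequence: str) -> float:
    """Rough melting-temperature estimate (degrees Celsius) for a short oligonucleotide.

    Uses the Wallace rule (2 degrees per A/T, 4 degrees per G/C); this is only a rule of
    thumb for oligos of roughly 14-20 bases and ignores salt and chemical denaturants.
    """
    seq = sequence.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("ATGCATGCATGCATGC"))  # 8 A/T and 8 G/C bases -> 48.0
```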
Chemical denaturation as an alternative
The optical activity (absorption and scattering of light) and hydrodynamic properties (translational diffusion, sedimentation coefficients, and rotational correlation times) of formamide-denatured nucleic acids are similar to those of heat-denatured nucleic acids. Therefore, depending on the desired effect, chemically denaturing DNA can provide a gentler procedure for denaturing nucleic acids than denaturation induced by heat. Studies comparing different denaturation methods such as heating, bead milling with different bead sizes, probe sonication, and chemical denaturation show that chemical denaturation can provide quicker denaturation compared to the other physical denaturation methods described. Particularly in cases where rapid renaturation is desired, chemical denaturation agents can provide an ideal alternative to heating. For example, DNA strands denatured with alkaline agents such as NaOH renature as soon as phosphate buffer is added.
Denaturation due to air
Small, electronegative molecules such as nitrogen and oxygen, which are the primary gases in air, significantly impact the ability of surrounding molecules to participate in hydrogen bonding. These molecules compete with surrounding hydrogen bond acceptors for hydrogen bond donors, therefore acting as "hydrogen bond breakers" and weakening interactions between surrounding molecules in the environment. Antiparallel strands in DNA double helices are non-covalently bound by hydrogen bonding between base pairs; nitrogen and oxygen therefore maintain the potential to weaken the integrity of DNA when exposed to air. As a result, DNA strands exposed to air require less force to separate and exhibit lower melting temperatures.
Applications
Many laboratory techniques rely on the ability of nucleic acid strands to separate. By understanding the properties of nucleic acid denaturation, the following methods were created:
PCR
Southern blot
Northern blot
DNA sequencing
Denaturants
Protein denaturants
Acids
Acidic protein denaturants include:
Acetic acid
Trichloroacetic acid 12% in water
Sulfosalicylic acid
Bases
Bases work similarly to acids in denaturation. They include:
Sodium bicarbonate
Solvents
Most organic solvents are denaturing, including:
Ethanol
Cross-linking reagents
Cross-linking agents for proteins include:
Formaldehyde
Glutaraldehyde
Chaotropic agents
Chaotropic agents include:
Urea 6–8 mol/L
Guanidinium chloride 6 mol/L
Lithium perchlorate 4.5 mol/L
Sodium dodecyl sulfate
Disulfide bond reducers
Agents that break disulfide bonds by reduction include:
2-Mercaptoethanol
Dithiothreitol
TCEP (tris(2-carboxyethyl)phosphine)
Chemically reactive agents
Agents such as hydrogen peroxide, elemental chlorine, hypochlorous acid (chlorine water), bromine, bromine water, iodine, nitric and other oxidizing acids, and ozone react with sensitive moieties such as sulfide/thiol groups and activated aromatic rings (e.g., phenylalanine), in effect damaging the protein and rendering it useless.
Other
Mechanical agitation
Picric acid
Radiation
Temperature
Nucleic acid denaturants
Chemical
Acidic nucleic acid denaturants include:
Acetic acid
HCl
Nitric acid
Basic nucleic acid denaturants include:
NaOH
Other nucleic acid denaturants include:
DMSO
Formamide
Guanidine
Sodium salicylate
Propylene glycol
Urea
Physical
Thermal denaturation
Beads mill
Probe sonication
Radiation
See also
Denatured alcohol
Equilibrium unfolding
Fixation (histology)
Molten globule
Protein folding
Random coil
References
External links
McGraw-Hill Online Learning Center — Animation: Protein Denaturation
Biochemical reactions
Nucleic acids
Protein structure | 0.789663 | 0.996573 | 0.786957 |
Methylation | Methylation, in the chemical sciences, is the addition of a methyl group on a substrate, or the substitution of an atom (or group) by a methyl group. Methylation is a form of alkylation, with a methyl group replacing a hydrogen atom. These terms are commonly used in chemistry, biochemistry, soil science, and biology.
In biological systems, methylation is catalyzed by enzymes; such methylation can be involved in modification of heavy metals, regulation of gene expression, regulation of protein function, and RNA processing. In vitro methylation of tissue samples is also a way to reduce some histological staining artifacts. The reverse of methylation is demethylation.
In biology
In biological systems, methylation is accomplished by enzymes. Methylation can modify heavy metals and can regulate gene expression, RNA processing, and protein function. It is a key process underlying epigenetics. Sources of methyl groups include S-methylmethionine, methyl folate, methyl B12.
Methanogenesis
Methanogenesis, the process that generates methane from CO2, involves a series of methylation reactions. These reactions are catalyzed by a set of enzymes harbored by a family of anaerobic microbes.
In reverse methanogenesis, methane is the methylating agent.
O-methyltransferases
A wide variety of phenols undergo O-methylation to give anisole derivatives. This process, catalyzed by such enzymes as caffeoyl-CoA O-methyltransferase, is a key reaction in the biosynthesis of lignols, precursors to lignin, a major structural component of plants.
Plants produce flavonoids and isoflavones with methylations on hydroxyl groups, i.e. methoxy bonds. This 5-O-methylation affects the flavonoid's water solubility. Examples are 5-O-methylgenistein, 5-O-methylmyricetin, and 5-O-methylquercetin (azaleatin).
Proteins
Along with ubiquitination and phosphorylation, methylation is a major biochemical process for modifying protein function. The most prevalent protein methylations affect arginine and lysine residues of specific histones. Histidine, glutamate, asparagine, and cysteine residues are also susceptible to methylation. Some of these products include S-methylcysteine, two isomers of N-methylhistidine, and two isomers of N-methylarginine.
Methionine synthase
Methionine synthase regenerates methionine (Met) from homocysteine (Hcy). The overall reaction transforms 5-methyltetrahydrofolate (N5-MeTHF) into tetrahydrofolate (THF) while transferring a methyl group to Hcy to form Met. Methionine synthases can be cobalamin-dependent or cobalamin-independent: plants have both, whereas animals depend on the methylcobalamin-dependent form.
In methylcobalamin-dependent forms of the enzyme, the reaction proceeds in two steps in a ping-pong mechanism. The enzyme is initially primed into a reactive state by the transfer of a methyl group from N5-MeTHF to Co(I) in enzyme-bound cobalamin (Cob, also known as vitamin B12), forming methylcobalamin (Me-Cob), which now contains Me-Co(III) and activates the enzyme. Then, a Hcy that has coordinated to an enzyme-bound zinc to form a reactive thiolate reacts with the Me-Cob. The activated methyl group is transferred from Me-Cob to the Hcy thiolate, which regenerates Co(I) in Cob, and Met is released from the enzyme.
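The two steps can be written schematically as follows (a simplified summary of the description above, with Cob denoting the enzyme-bound cobalamin):

N5-MeTHF + Co(I)–Cob → THF + Me–Co(III)–Cob

Me–Co(III)–Cob + Hcy (thiolate) → Co(I)–Cob + Met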
Heavy metals: arsenic, mercury, cadmium
Biomethylation is the pathway for converting some heavy elements into more mobile or more lethal derivatives that can enter the food chain. The biomethylation of arsenic compounds starts with the formation of methanearsonates. Thus, trivalent inorganic arsenic compounds are methylated to give methanearsonate. S-adenosylmethionine is the methyl donor. The methanearsonates are the precursors to dimethylarsonates, again by the cycle of reduction (to methylarsonous acid) followed by a second methylation. Related pathways are found in the microbial methylation of mercury to methylmercury.
Epigenetic methylation
DNA methylation
DNA methylation is the conversion of cytosine to 5-methylcytosine. The formation of Me-CpG is catalyzed by the enzyme DNA methyltransferase. In vertebrates, DNA methylation typically occurs at CpG sites (cytosine-phosphate-guanine sites—that is, sites where a cytosine is directly followed by a guanine in the DNA sequence). In mammals, DNA methylation is common in body cells, and methylation of CpG sites seems to be the default. Human DNA has about 80–90% of CpG sites methylated, but there are certain areas, known as CpG islands, that are CG-rich (high cytosine and guanine content, made up of about 65% CG residues), which remain largely unmethylated. These are associated with the promoters of 56% of mammalian genes, including all ubiquitously expressed genes. One to two percent of the human genome consists of CpG clusters, and there is an inverse relationship between CpG methylation and transcriptional activity. Methylation contributing to epigenetic inheritance can occur through either DNA methylation or protein methylation. Improper methylation of human genes can lead to disease development, including cancer.
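CpG islands are commonly identified computationally from the GC content and the ratio of observed to expected CpG dinucleotides (the widely used Gardiner-Garden and Frommer criteria are roughly: length of at least 200 bp, GC content above 50%, and an observed/expected CpG ratio above 0.6). The sketch below computes those two quantities for a single sequence window; the windowing and thresholds are simplified assumptions for illustration:

```python
def cpg_stats(window: str):
    """Return (GC fraction, observed/expected CpG ratio) for a DNA sequence window."""
    s = window.upper()
    n = len(s)
    c, g = s.count("C"), s.count("G")
    observed_cpg = s.count("CG")              # observed CpG dinucleotides
    gc_fraction = (c + g) / n if n else 0.0
    expected_cpg = (c * g) / n if n else 0.0  # expected count for this base composition
    obs_exp = observed_cpg / expected_cpg if expected_cpg else 0.0
    return gc_fraction, obs_exp

gc, ratio = cpg_stats("CGCGTACGCGGCGCATCGCG")
print(f"GC = {gc:.2f}, observed/expected CpG = {ratio:.2f}")
# Island-like windows have GC > 0.5 and an observed/expected ratio > 0.6.
```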
In honey bees, DNA methylation is associated with alternative splicing and gene regulation based on functional genomic research published in 2013. In addition, DNA methylation is associated with expression changes in immune genes when honey bees were under lethal viral infection. Several review papers have been published on the topics of DNA methylation in social insects.
RNA methylation
RNA methylation occurs in different RNA species viz. tRNA, rRNA, mRNA, tmRNA, snRNA, snoRNA, miRNA, and viral RNA. Different catalytic strategies are employed for RNA methylation by a variety of RNA-methyltransferases. RNA methylation is thought to have existed before DNA methylation in the early forms of life evolving on earth.
N6-methyladenosine (m6A) is the most common and abundant methylation modification in RNA molecules (mRNA) present in eukaryotes. 5-methylcytosine (5-mC) also commonly occurs in various RNA molecules. Recent data strongly suggest that m6A and 5-mC RNA methylation affects the regulation of various biological processes such as RNA stability and mRNA translation, and that abnormal RNA methylation contributes to etiology of human diseases.
In social insects such as honey bees, RNA methylation is studied as a possible epigenetic mechanism underlying aggression via reciprocal crosses.
Protein methylation
Protein methylation typically takes place on arginine or lysine amino acid residues in the protein sequence. Arginine can be methylated once (monomethylated arginine) or twice, with either both methyl groups on one terminal nitrogen (asymmetric dimethylarginine) or one on both nitrogens (symmetric dimethylarginine), by protein arginine methyltransferases (PRMTs). Lysine can be methylated once, twice, or three times by lysine methyltransferases. Protein methylation has been most studied in the histones. The transfer of methyl groups from S-adenosyl methionine to histones is catalyzed by enzymes known as histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression. Protein methylation is one type of post-translational modification.
Evolution
Methyl metabolism is very ancient and can be found in all organisms on earth, from bacteria to humans, indicating the importance of methyl metabolism for physiology. Indeed, pharmacological inhibition of global methylation in species ranging from human, mouse, fish, fly, roundworm, plant, algae, and cyanobacteria causes the same effects on their biological rhythms, demonstrating conserved physiological roles of methylation during evolution.
In chemistry
The term methylation in organic chemistry refers to the alkylation process that delivers a CH3 (methyl) group.
Electrophilic methylation
Methylations are commonly performed using electrophilic methyl sources such as iodomethane, dimethyl sulfate, dimethyl carbonate, or tetramethylammonium chloride. Less common but more powerful (and more dangerous) methylating reagents include methyl triflate, diazomethane, and methyl fluorosulfonate (magic methyl). These reagents all react via SN2 nucleophilic substitutions. For example, a carboxylate may be methylated on oxygen to give a methyl ester; an alkoxide salt may likewise be methylated to give an ether; or a ketone enolate may be methylated on carbon to produce a new ketone.
The Purdie methylation is a method specific for the methylation at oxygen of carbohydrates, using iodomethane and silver oxide.
Eschweiler–Clarke methylation
The Eschweiler–Clarke reaction is a method for methylation of amines. This method avoids the risk of quaternization, which occurs when amines are methylated with methyl halides.
Diazomethane and trimethylsilyldiazomethane
Diazomethane and the safer analogue trimethylsilyldiazomethane methylate carboxylic acids, phenols, and even alcohols:
RCO2H + tmsCHN2 + CH3OH -> RCO2CH3 + CH3Otms + N2
The method offers the advantage that the side products are easily removed from the product mixture.
Nucleophilic methylation
Methylation sometimes involves the use of nucleophilic methyl reagents. Strongly nucleophilic methylating agents include methyllithium and Grignard reagents such as methylmagnesium bromide. Such reagents add a methyl group to the carbonyl carbon (C=O) of ketones and aldehydes.
Milder methylating agents include tetramethyltin, dimethylzinc, and trimethylaluminium.
See also
Biology topics
Bisulfite sequencing – the biochemical method used to determine the presence or absence of methyl groups on a DNA sequence
MethDB DNA Methylation Database
Microscale thermophoresis – a biophysical method to determine the methylation state of DNA
Remethylation, the reversible removal of methyl group in methionine and 5-methylcytosine
Organic chemistry topics
Alkylation
Methoxy
Titanium–zinc methylenation
Petasis reagent
Nysted reagent
Wittig reaction
Tebbe's reagent
References
External links
deltaMasses Detection of Methylations after Mass Spectrometry
Epigenetics
Organic reactions
Post-translational modification | 0.790967 | 0.99491 | 0.786941 |
DOGMA | DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of research project in progress at Vrije Universiteit Brussel's STARLab, Semantics Technology and Applications Research Laboratory. It is an internally funded project, concerned with the more general aspects of extracting, storing, representing and browsing information.
Methodological Root
DOGMA, as a dialect of the fact-based modeling approach, has its root in database semantics and model theory. It adheres to the fact-based information management methodology towards Conceptualization and 100% principle of ISO TR9007.
The DOGMA methodological principles include:
Data independence: the meaning of data shall be decoupled from the data itself.
Interpretation independence: unary or binary fact types (i.e., lexons) shall adhere to a formal interpretation in order to store semantics; lexons themselves do not carry semantics.
Multiple views on and uses of stored conceptualization. An ontology shall be scalable and extensible.
Language neutral. An ontology shall meet multilingual needs.
Presentation independence: an ontology in DOGMA shall meet any kind of presentation need its users may have. As an FBM dialect, DOGMA supports both graphical notations and textual presentation in a controlled language. Semantic decision tables, for example, are a means to visualize processes in a DOGMA commitment, and SDRule-L is used to visualize and publish ontology-based decision support models.
Concepts shall be validated by the stakeholders.
Informal textual definitions shall be provided in case the source of the ontology is missing or incomplete.
Technical introduction
DOGMA is an ontology approach and framework that is not restricted to a particular representation language. This approach has some distinguishing characteristics that make it different from traditional ontology approaches such as (i) its groundings in the linguistic representations of knowledge and (ii) the methodological separation of the domain-versus-application conceptualization, which is called the ontology double articulation principle. The idea is to enhance the potential for re-use and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple declaring either (in some context G):
taxonomical relationship (genus): e.g., < G, manager, is a, subsumes, person >;
non-taxonomical relationship (differentia): e.g., < G, manager, directs, directed by, company >.
Lexons could be approximately considered as a combination of an RDF/OWL triple and its inverse, or as a conceptual graph style relation (Sowa, 1984). The next section elaborates more on the notions of context.
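To make the 5-tuple structure concrete, the following Python sketch represents lexons as named tuples; the field names and the printing loop are illustrative assumptions rather than part of the DOGMA specification, and the two instances reproduce the examples above.

    from collections import namedtuple

    # A lexon is a 5-tuple: <context, term1, role, co-role, term2>
    Lexon = namedtuple("Lexon", ["context", "term1", "role", "co_role", "term2"])

    # Taxonomical relationship (genus)
    genus = Lexon("G", "manager", "is a", "subsumes", "person")

    # Non-taxonomical relationship (differentia)
    differentia = Lexon("G", "manager", "directs", "directed by", "company")

    for lexon in (genus, differentia):
        print(lexon)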
Language versus conceptual level
Another distinguishing characteristic of DOGMA is the explicit duality (orthogonal to double articulation) in interpretation between the language level and conceptual level. The goal of this separation is primarily to disambiguate the lexical representation of terms in a lexon (on the language level) into concept definitions (on the conceptual level), which are word senses taken from lexical resources such as WordNet. The meaning of the terms in a lexon is dependent on the context of elicitation.
For example, consider a term “capital”. If this term was elicited from a typewriter manual, it has a different meaning (read: concept definition) than when elicited from a book on marketing. The intuition that a context provides here is: a context is an abstract identifier that refers to implicit or tacit assumptions in a domain, and that maps a term to its intended meaning (i.e. concept identifier) within these assumptions.
Ontology evolution
Ontologies naturally co-evolve with their communities of use. De Leenheer (2007) therefore identified a set of primitive operators for changing ontologies. These change primitives are conditional, meaning that their applicability depends on pre- and post-conditions; this guarantees that only valid structures can be built.
Context dependency types
De Leenheer and de Moor (2005) distinguished four key characteristics of context:
a context packages related knowledge: it defines part of the knowledge of a particular domain,
it disambiguates the lexical representation of concepts and relationships by distinguishing between language level and conceptual level,
it defines context dependencies between different ontological contexts and
contexts can be embedded or linked, in the sense that statements about contexts are themselves in context.
Based on this, they identified three different types of context dependencies within one ontology (intra-ontological) and between different ontologies (inter-ontological): articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory, would be a specialisation dependency for which the dependency constraint is equivalent to the conditions for CG-specialisation.
Context dependencies provide a better understanding of the whereabouts of knowledge elements and their inter-dependencies, and consequently make negotiation and application less vulnerable to ambiguity, hence more practical.
See also
Ontology engineering
Business semantics management
Data governance
Metadata management
References
Further reading
Mustafa Jarrar: "Towards Methodological Principles for Ontology Engineering". PhD Thesis. Vrije Universiteit Brussel. (May 2005)
Mustafa Jarrar: "Towards the notion of gloss, and the adoption of linguistic resources in formal ontology engineering". In proceedings of the 15th International World Wide Web Conference (WWW2006). Edinburgh, Scotland. Pages 497-503. ACM Press. . May 2006.
Mustafa Jarrar and Robert Meersman: "Ontology Engineering -The DOGMA Approach". Book Chapter (Chapter 3). In Advances in Web Semantics I. Volume LNCS 4891, Springer. 2008.
Banerjee, J., Kim, W. Kim, H., and Korth., H. (1987) Semantics and implementation of schema evolution in object-oriented databases. Proc. ACM SIGMOD Conf. Management of Data, 16(3), pp. 311–322
De Leenheer P, de Moor A (2005). Context-driven disambiguation in ontology elicitation. In P. Shvaiko and J. Euzenat (eds), Context and Ontologies: Theory, Practice, and Applications. Proc. of the 1st Context and Ontologies Workshop, AAAI/IAAI 2005, Pittsburgh, USA, pp 17–24
De Leenheer P, de Moor A, Meersman R (2007). Context dependency management in ontology engineering: a formal approach. Journal on Data Semantics VIII, LNCS 4380, Springer, pp 26–56
Jarrar, M., Demey, J., Meersman, R. (2003) On reusing conceptual data modeling for ontology engineering. Journal on Data Semantics 1(1):185–207
Spyns P, Meersman R, Jarrar M (2002). Data modeling versus ontology engineering. SIGMOD Record, 31(4), pp 12–17
Peter Spyns, Yan Tang and Robert Meersman, An Ontology Engineering Methodology for DOGMA, Journal of Applied Ontology, special issue on "Ontological Foundations for Conceptual Modeling", Giancarlo Guizzardi and Terry Halpin (eds.), Volume 3, Issue 1-2, p. 13-39 (2008).
Fact-based modeling (FBM) official website: http://www.factbasedmodeling.org/
Ontology (information science) | 0.788475 | 0.997828 | 0.786762 |
Applied science | Applied science is the application of the scientific method and scientific knowledge to attain practical goals. It includes a broad range of disciplines, such as engineering and medicine. Applied science is often contrasted with basic science, which is focused on advancing scientific theories and laws that explain and predict natural or other phenomena.
There are applied natural sciences, as well as applied formal and social sciences. Applied science examples include genetic epidemiology which applies statistics and probability theory, and applied psychology, including criminology.
Applied research
Applied research is the use of empirical methods to collect data for practical purposes. It accesses and uses accumulated theories, knowledge, methods, and techniques for a specific state, business, or client-driven purpose. In contrast to engineering, applied research does not include analyses or optimization of business, economics, and costs. Applied research can be better understood in any area when contrasting it with basic or pure research. Basic geographical research strives to create new theories and methods that aid in explaining the processes that shape the spatial structure of physical or human environments. Instead, applied research utilizes existing geographical theories and methods to comprehend and address particular empirical issues. Applied research usually has specific commercial objectives related to products, procedures, or services. The comparison of pure research and applied research provides a basic framework and direction for businesses to follow.
Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed. For example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. Implications for the interpretation of results brought about by relaxing an otherwise strict canon of methodology should also be considered.
Moreover, this type of research method applies natural sciences to human conditions:
Action research: aids firms in identifying workable solutions to issues influencing them.
Evaluation research: researchers examine available data to assist clients in making wise judgments.
Industrial research: create new goods/services that will satisfy the demands of a target market. (Industrial development would be scaling up production of the new goods/services for mass consumption to satisfy the economic demand of the customers while maximizing the ratio of the good/service output rate to resource input rate, the ratio of good/service revenue to material & energy costs, and the good/service quality. Industrial development would be considered engineering. Industrial development would fall outside the scope of applied research.)
Since applied research has a provisional close-to-the-problem and close-to-the-data orientation, it may also use a more provisional conceptual framework, such as working hypotheses or pillar questions. The OECD's Frascati Manual describes applied research as one of the three forms of research, along with basic research & experimental development.
Due to its practical focus, applied research information will be found in the literature associated with individual disciplines.
Branches
Applied research is a method of problem-solving and is also practical in areas of science, such as its presence in applied psychology. Applied psychology uses human behavior to grab information to locate a main focus in an area that can contribute to finding a resolution. More specifically, this study is applied in the area of criminal psychology. With the knowledge obtained from applied research, studies are conducted on criminals alongside their behavior to apprehend them. Moreover, the research extends to criminal investigations. Under this category, research methods demonstrate an understanding of the scientific method and social research designs used in criminological research. These reach more branches along the procedure towards the investigations, alongside laws, policy, and criminological theory.
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Some scientific subfields used by engineers include thermodynamics, heat transfer, fluid mechanics, statics, dynamics, mechanics of materials, kinematics, electromagnetism, materials science, earth sciences, and engineering physics.
Medical sciences, such as medical microbiology, pharmaceutical research, and clinical virology, are applied sciences that apply biology and chemistry to medicine.
In education
In Canada, the Netherlands, and other places, the Bachelor of Applied Science (BASc) is sometimes equivalent to the Bachelor of Engineering and is classified as a professional degree. This is based on the age of the school where applied science used to include boiler making, surveying, and engineering. There are also Bachelor of Applied Science degrees in Child Studies. The BASc tends to focus more on the application of the engineering sciences. In Australia and New Zealand, this degree is awarded in various fields of study and is considered a highly specialized professional degree.
In the United Kingdom's educational system, Applied Science refers to a suite of "vocational" science qualifications that run alongside "traditional" General Certificate of Secondary Education or A-Level Sciences. Applied Science courses generally contain more coursework (also known as portfolio or internally assessed work) compared to their traditional counterparts. These are an evolution of the GNVQ qualifications offered up to 2005. These courses regularly come under scrutiny and are due for review following the Wolf Report 2011; however, their merits are argued elsewhere.
In the United States, The College of William & Mary offers an undergraduate minor as well as Master of Science and Doctor of Philosophy degrees in "applied science". Courses and research cover varied fields, including neuroscience, optics, materials science and engineering, nondestructive testing, and nuclear magnetic resonance. University of Nebraska–Lincoln offers a Bachelor of Science in applied science, an online completion Bachelor of Science in applied science, and a Master of Applied Science. Coursework is centered on science, agriculture, and natural resources with a wide range of options, including ecology, food genetics, entrepreneurship, economics, policy, animal science, and plant science. In New York City, the Bloomberg administration awarded the consortium of Cornell-Technion $100 million in City capital to construct the universities' proposed Applied Sciences campus on Roosevelt Island.
See also
Applied mathematics
Basic research
Exact sciences
Hard and soft science
Invention
Secondary research
References
External links
Branches of science | 0.789859 | 0.996006 | 0.786704 |
Earth science | Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
Geology
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time.
Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
Earth's interior
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere return to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
Atmospheric science
Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change.
The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
Hydrology
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
Ecology
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
Physical geography
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
Methodology
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena (e.g. Antarctica or hot spot island chains).
A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history.
Earth's spheres
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding to rocks, water, air, and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere.
The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry is defined as the study of the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the composition, structure, processes, and other physical aspects of the Earth. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). Major subdivisions in this field of study include edaphology and pedology.
Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from the study of other planets in the Solar System, Earth being the only planet known to teem with life.
Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involves all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry."
Glaciology covers the icy parts of the Earth (or cryosphere).
Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics.
Earth science breakup
Atmosphere
Atmospheric chemistry
Geography
Climatology
Meteorology
Hydrometeorology
Paleoclimatology
Biosphere
Biogeochemistry
Biogeography
Ecology
Landscape ecology
Geoarchaeology
Geomicrobiology
Paleontology
Palynology
Micropaleontology
Hydrosphere
Hydrology
Hydrogeology
Limnology (freshwater science)
Oceanography (marine science)
Chemical oceanography
Physical oceanography
Biological oceanography (marine biology)
Geological oceanography (marine geology)
Paleoceanography
Lithosphere (geosphere)
Geology
Economic geology
Engineering geology
Environmental geology
Forensic geology
Historical geology
Quaternary geology
Planetary geology and planetary geography
Sedimentology
Stratigraphy
Structural geology
Geography
Human geography
Physical geography
Geochemistry
Geomorphology
Geophysics
Geochronology
Geodynamics (see also Tectonics)
Geomagnetism
Gravimetry (also part of Geodesy)
Seismology
Glaciology
Hydrogeology
Mineralogy
Crystallography
Gemology
Petrology
Petrophysics
Speleology
Volcanology
Pedosphere
Geography
Soil science
Edaphology
Pedology
Systems
Earth system science
Environmental science
Geography
Human geography
Physical geography
Gaia hypothesis
Systems ecology
Systems geology
Others
Geography
Cartography
Geoinformatics (GIScience)
Geostatistics
Geodesy and Surveying
Remote Sensing
Hydrography
Nanogeoscience
See also
American Geosciences Institute
Earth sciences graphics software
Four traditions of geography
Glossary of geology terms
List of Earth scientists
List of geoscience organizations
List of unsolved problems in geoscience
Making North America
National Association of Geoscience Teachers
Solid-earth science
Science tourism
Structure of the Earth
References
Sources
Further reading
Allaby M., 2008. Dictionary of Earth Sciences, Oxford University Press,
Korvin G., 1998. Fractal Models in the Earth Sciences, Elsevier,
Tarbuck E. J., Lutgens F. K., and Tasa D., 2002. Earth Science, Prentice Hall,
External links
Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center.
Geoethics in Planetary and Space Exploration.
Geology Buzz: Earth Science
Planetary science
Science-related lists | 0.788669 | 0.997397 | 0.786616 |
Stoichiometry | Stoichiometry is the relationship between the weights of reactants and products before, during, and following chemical reactions.
Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products, leading to the insight that the relations among quantities of reactants and products typically form a ratio of positive integers. This means that if the amounts of the separate reactants are known, then the amount of the product can be calculated. Conversely, if one reactant has a known quantity and the quantity of the products can be empirically determined, then the amount of the other reactants can also be calculated.
This is illustrated by the combustion of methane, for which the balanced equation is: CH4 + 2 O2 -> CO2 + 2 H2O.
Here, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of water. This particular chemical equation is an example of complete combustion. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products and reactants that are produced or needed in a given reaction. Describing the quantitative relationships among substances as they participate in chemical reactions is known as reaction stoichiometry. In the example above, reaction stoichiometry measures the relationship between the quantities of methane and oxygen that react to form carbon dioxide and water.
Because of the well known relationship of moles to atomic weights, the ratios that are arrived at by stoichiometry can be used to determine quantities by weight in a reaction described by a balanced equation. This is called composition stoichiometry.
Gas stoichiometry deals with reactions involving gases, where the gases are at a known temperature, pressure, and volume and can be assumed to be ideal gases. For gases, the volume ratio is ideally the same by the ideal gas law, but the mass ratio of a single reaction has to be calculated from the molecular masses of the reactants and products. In practice, because of the existence of isotopes, molar masses are used instead in calculating the mass ratio.
Etymology
The term stoichiometry was first used by Jeremias Benjamin Richter in 1792, when the first volume of Richter's Fundamentals of Stoichiometry, or the Art of Measuring the Chemical Elements was published. The term is derived from the Ancient Greek words στοιχεῖον (stoicheion, "element") and μέτρον (metron, "measure").
L. Darmstaedter and Ralph E. Oesper have written a useful account of this.
Definition
A stoichiometric amount or stoichiometric ratio of a reagent is the optimum amount or ratio where, assuming that the reaction proceeds to completion:
All of the reagent is consumed
There is no deficiency of the reagent
There is no excess of the reagent.
Stoichiometry rests upon a few basic laws: the law of conservation of mass, the law of definite proportions (i.e., the law of constant composition), the law of multiple proportions, and the law of reciprocal proportions. In general, chemical reactions combine in definite ratios of chemicals. Since chemical reactions can neither create nor destroy matter, nor transmute one element into another, the amount of each element must be the same throughout the overall reaction. For example, the number of atoms of a given element X on the reactant side must equal the number of atoms of that element on the product side, whether or not all of those atoms are actually involved in a reaction.
Chemical reactions, as macroscopic unit operations, consist of simply a very large number of elementary reactions, where a single molecule reacts with another molecule. As the reacting molecules (or moieties) consist of a definite set of atoms in an integer ratio, the ratio between reactants in a complete reaction is also in integer ratio. A reaction may consume more than one molecule, and the stoichiometric number counts this number, defined as positive for products (added) and negative for reactants (removed). The unsigned coefficients are generally referred to as the stoichiometric coefficients.
Each element has an atomic mass, and considering molecules as collections of atoms, compounds have a definite molecular mass, which when expressed in daltons is numerically equal to the molar mass in g/mol. By definition, the atomic mass of carbon-12 is 12 Da, giving a molar mass of 12 g/mol. The number of molecules per mole in a substance is given by the Avogadro constant, exactly 6.02214076 × 10^23 per mole since the 2019 revision of the SI. Thus, to calculate the stoichiometry by mass, the number of molecules required for each reactant is expressed in moles and multiplied by the molar mass of each to give the mass of each reactant per mole of reaction. The mass ratios can be calculated by dividing each by the total in the whole reaction.
Elements in their natural state are mixtures of isotopes of differing mass; thus, atomic masses and thus molar masses are not exactly integers. For instance, instead of an exact 14:3 proportion, 17.04 g of ammonia consists of 14.01 g of nitrogen and 3 × 1.01 g of hydrogen, because natural nitrogen includes a small amount of nitrogen-15, and natural hydrogen includes hydrogen-2 (deuterium).
A stoichiometric reactant is a reactant that is consumed in a reaction, as opposed to a catalytic reactant, which is not consumed in the overall reaction because it reacts in one step and is regenerated in another step.
Converting grams to moles
Stoichiometry is not only used to balance chemical equations but also used in conversions, i.e., converting from grams to moles using molar mass as the conversion factor, or from grams to milliliters using density. For example, to find the amount of NaCl (sodium chloride) in 2.00 g, one would do the following: 2.00 g NaCl × (1 mol NaCl / 58.44 g NaCl) = 0.0342 mol NaCl.
In the above example, when written out in fraction form, the units of grams cancel as a multiplicative identity (g/g = 1), leaving the resulting amount in moles, the unit that was needed.
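The same unit conversion can be written as a one-line computation. The Python sketch below is only an illustration of the arithmetic; the molar mass is the usual tabulated value for NaCl.

    # Convert a mass in grams to an amount in moles: n = m / M
    def grams_to_moles(mass_g, molar_mass_g_per_mol):
        return mass_g / molar_mass_g_per_mol

    # 2.00 g of NaCl, with M(NaCl) = 58.44 g/mol
    n_nacl = grams_to_moles(2.00, 58.44)
    print(f"{n_nacl:.4f} mol NaCl")  # 0.0342 mol NaCl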
Molar proportion
Stoichiometry is often used to balance chemical equations (reaction stoichiometry). For example, the two diatomic gases, hydrogen and oxygen, can combine to form a liquid, water, in an exothermic reaction, as described by the following equation: 2 H2 + O2 -> 2 H2O.
Reaction stoichiometry describes the 2:1:2 ratio of hydrogen, oxygen, and water molecules in the above equation.
The molar ratio allows for conversion between moles of one substance and moles of another. For example, in the reaction
the amount of water that will be produced by the combustion of 0.27 moles of is obtained using the molar ratio between and of 2 to 4.
The term stoichiometry is also often used for the molar proportions of elements in stoichiometric compounds (composition stoichiometry). For example, the stoichiometry of hydrogen and oxygen in H2O is 2:1. In stoichiometric compounds, the molar proportions are whole numbers.
Determining amount of product
Stoichiometry can also be used to find the quantity of a product yielded by a reaction. If a piece of solid copper (Cu) were added to an aqueous solution of silver nitrate, the silver (Ag) would be replaced in a single displacement reaction forming aqueous copper(II) nitrate and solid silver. How much silver is produced if 16.00 grams of Cu is added to the solution of excess silver nitrate?
The following steps would be used:
Write and balance the equation
Mass to moles: Convert grams of Cu to moles of Cu
Mole ratio: Convert moles of Cu to moles of Ag produced
Mole to mass: Convert moles of Ag to grams of Ag produced
The complete balanced equation would be: Cu + 2 AgNO3 -> Cu(NO3)2 + 2 Ag.
For the mass to mole step, the mass of copper (16.00 g) would be converted to moles of copper by dividing the mass of copper by its molar mass: 63.55 g/mol.
Now that the amount of Cu in moles (0.2518) is found, we can set up the mole ratio. This is found by looking at the coefficients in the balanced equation: Cu and Ag are in a 1:2 ratio.
Now that the moles of Ag produced is known to be 0.5036 mol, we convert this amount to grams of Ag produced to come to the final answer: 0.5036 mol Ag × 107.87 g/mol = 54.3 g Ag.
This set of calculations can be further condensed into a single step: 16.00 g Cu × (1 mol Cu / 63.55 g Cu) × (2 mol Ag / 1 mol Cu) × (107.87 g Ag / 1 mol Ag) = 54.3 g Ag.
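The same chain of conversions can be sketched in a few lines of Python; this is only an illustration of the arithmetic above, using the standard molar masses of Cu (63.55 g/mol) and Ag (107.87 g/mol) and the 1:2 mole ratio from the balanced equation.

    # Mass of Ag produced from 16.00 g of Cu via Cu + 2 AgNO3 -> Cu(NO3)2 + 2 Ag
    M_CU, M_AG = 63.55, 107.87      # molar masses in g/mol
    mol_cu = 16.00 / M_CU           # step 2: mass of Cu to moles of Cu
    mol_ag = mol_cu * 2             # step 3: mole ratio, 2 mol Ag per 1 mol Cu
    mass_ag = mol_ag * M_AG         # step 4: moles of Ag to mass of Ag
    print(f"{mass_ag:.1f} g Ag")    # 54.3 g Ag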
Further examples
For propane reacting with oxygen gas, the balanced chemical equation is: C3H8 + 5 O2 -> 3 CO2 + 4 H2O.
The mass of water formed if 120 g of propane is burned in excess oxygen is then 120 g C3H8 × (1 mol C3H8 / 44.10 g C3H8) × (4 mol H2O / 1 mol C3H8) × (18.02 g H2O / 1 mol H2O) ≈ 196 g H2O.
Stoichiometric ratio
Stoichiometry is also used to find the right amount of one reactant to "completely" react with the other reactant in a chemical reaction – that is, the stoichiometric amounts that would result in no leftover reactants when the reaction takes place. An example is shown below using the thermite reaction: Fe2O3 + 2 Al -> Al2O3 + 2 Fe.
This equation shows that 1 mole of Fe2O3 and 2 moles of aluminum will produce 1 mole of aluminium oxide and 2 moles of iron. So, to completely react with 85.0 g of Fe2O3 (0.532 mol), 28.7 g (1.06 mol) of aluminium are needed.
Limiting reagent and percent yield
The limiting reagent is the reagent that limits the amount of product that can be formed and is completely consumed when the reaction is complete. An excess reactant is a reactant that is left over once the reaction has stopped due to the limiting reactant being exhausted.
Consider the equation of roasting lead(II) sulfide (PbS) in oxygen to produce lead(II) oxide (PbO) and sulfur dioxide: 2 PbS + 3 O2 -> 2 PbO + 2 SO2.
To determine the theoretical yield of lead(II) oxide if 200.0 g of lead(II) sulfide and 200.0 g of oxygen are heated in an open container: from the PbS, 200.0 g ÷ 239.3 g/mol = 0.836 mol of PbS, which could give 0.836 mol (186.6 g) of PbO; from the O2, 200.0 g ÷ 32.00 g/mol = 6.25 mol of O2, which could give at most 4.17 mol (930 g) of PbO.
Because a lesser amount of PbO is produced for the 200.0 g of PbS, it is clear that PbS is the limiting reagent.
In reality, the actual yield is not the same as the stoichiometrically-calculated theoretical yield. Percent yield, then, is expressed in the following equation: percent yield = (actual yield / theoretical yield) × 100%.
If 170.0 g of lead(II) oxide is obtained, then the percent yield would be calculated as follows: 170.0 g / 186.6 g × 100% = 91.1%.
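A short Python sketch of the same reasoning is given below; it is only an illustration, with molar masses rounded to two decimals, and it reproduces the numbers of the worked example above.

    # 2 PbS + 3 O2 -> 2 PbO + 2 SO2
    M_PBS, M_O2, M_PBO = 239.27, 32.00, 223.20   # molar masses in g/mol

    # Moles of PbO obtainable from each 200.0 g portion of reactant
    pbo_from_pbs = (200.0 / M_PBS) * (2 / 2)     # 2 mol PbO per 2 mol PbS
    pbo_from_o2 = (200.0 / M_O2) * (2 / 3)       # 2 mol PbO per 3 mol O2

    theoretical_mol = min(pbo_from_pbs, pbo_from_o2)   # the limiting reagent gives the smaller value
    theoretical_g = theoretical_mol * M_PBO
    percent_yield = 170.0 / theoretical_g * 100

    print(f"theoretical yield: {theoretical_g:.1f} g PbO")   # 186.6 g PbO
    print(f"percent yield: {percent_yield:.1f} %")           # 91.1 %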
Example
Consider the following reaction, in which iron(III) chloride reacts with hydrogen sulfide to produce iron(III) sulfide and hydrogen chloride: 2 FeCl3 + 3 H2S -> Fe2S3 + 6 HCl.
The stoichiometric masses for this reaction are:
324.41 g FeCl3, 102.25 g H2S, 207.89 g Fe2S3, 218.77 g HCl
Suppose 90.0 g of FeCl3 reacts with 52.0 g of H2S. To find the limiting reagent and the mass of HCl produced by the reaction, we change the above amounts by a factor of 90/324.41 and obtain the following amounts:
90.00 g FeCl3, 28.37 g H2S, 57.67 g Fe2S3, 60.69 g HCl
The limiting reactant (or reagent) is FeCl3, since all 90.00 g of it is used up while only 28.37 g of H2S are consumed. Thus, 52.0 − 28.4 = 23.6 g of H2S is left in excess. The mass of HCl produced is 60.7 g.
By looking at the stoichiometry of the reaction, one might have guessed FeCl3 to be the limiting reactant, since roughly three times more of it is used compared to H2S (324 g vs 102 g).
Different stoichiometries in competing reactions
Often, more than one reaction is possible given the same starting materials. The reactions may differ in their stoichiometry. For example, the methylation of benzene, through a Friedel–Crafts reaction using AlCl3 as a catalyst, may produce singly methylated, doubly methylated, or still more highly methylated products.
In this example, which reaction takes place is controlled in part by the relative concentrations of the reactants.
Stoichiometric coefficient and stoichiometric number
In lay terms, the stoichiometric coefficient of any given component is the number of molecules and/or formula units that participate in the reaction as written. A related concept is the stoichiometric number (using IUPAC nomenclature), wherein the stoichiometric coefficient is multiplied by +1 for all products and by −1 for all reactants.
For example, in the combustion of methane, CH4 + 2 O2 -> CO2 + 2 H2O, the stoichiometric number of CH4 is −1, the stoichiometric number of O2 is −2, for CO2 it would be +1 and for H2O it is +2.
In more technically precise terms, the stoichiometric number ν_i of the i-th component in a chemical reaction system is defined as
ν_i = ΔN_i / Δξ
or, in differential form,
dN_i = ν_i dξ,
where N_i is the number of molecules of i, and ξ is the progress variable or extent of reaction.
The stoichiometric number represents the degree to which a chemical species participates in a reaction. The convention is to assign negative numbers to reactants (which are consumed) and positive ones to products, consistent with the convention that increasing the extent of reaction will correspond to shifting the composition from reactants towards products. However, any reaction may be viewed as going in the reverse direction, and from that point of view, the extent of reaction ξ would change in the negative direction in order to lower the system's Gibbs free energy. Whether a reaction actually will go in the arbitrarily selected forward direction or not depends on the amounts of the substances present at any given time, which determines the kinetics and thermodynamics, i.e., whether equilibrium lies to the right or the left of the initial state.
In reaction mechanisms, stoichiometric coefficients for each step are always integers, since elementary reactions always involve whole molecules. If one uses a composite representation of an overall reaction, some may be rational fractions. There are often chemical species present that do not participate in a reaction; their stoichiometric coefficients are therefore zero. Any chemical species that is regenerated, such as a catalyst, also has a stoichiometric coefficient of zero.
The simplest possible case is an isomerization
A → B
in which ν_B = +1, since one molecule of B is produced each time the reaction occurs, while ν_A = −1, since one molecule of A is necessarily consumed. In any chemical reaction, not only is the total mass conserved but also the numbers of atoms of each kind are conserved, and this imposes corresponding constraints on possible values for the stoichiometric coefficients.
There are usually multiple reactions proceeding simultaneously in any natural reaction system, including those in biology. Since any chemical component can participate in several reactions simultaneously, the stoichiometric number of the i-th component in the k-th reaction is defined as
ν_ik = ∂N_i / ∂ξ_k,
so that the total (differential) change in the amount of the i-th component is
dN_i = Σ_k ν_ik dξ_k.
Extents of reaction provide the clearest and most explicit way of representing compositional change, although they are not yet widely used.
With complex reaction systems, it is often useful to consider both the representation of a reaction system in terms of the amounts of the chemicals present (state variables), and the representation in terms of the actual compositional degrees of freedom, as expressed by the extents of reaction ξ_k. The transformation from a vector expressing the extents to a vector expressing the amounts uses a rectangular matrix whose elements are the stoichiometric numbers ν_ik.
The maximum and minimum for any ξk occur whenever the first of the reactants is depleted for the forward reaction; or the first of the "products" is depleted if the reaction as viewed as being pushed in the reverse direction. This is a purely kinematic restriction on the reaction simplex, a hyperplane in composition space, or N‑space, whose dimensionality equals the number of linearly-independent chemical reactions. This is necessarily less than the number of chemical components, since each reaction manifests a relation between at least two chemicals. The accessible region of the hyperplane depends on the amounts of each chemical species actually present, a contingent fact. Different such amounts can even generate different hyperplanes, all sharing the same algebraic stoichiometry.
In accord with the principles of chemical kinetics and thermodynamic equilibrium, every chemical reaction is reversible, at least to some degree, so that each equilibrium point must be an interior point of the simplex. As a consequence, extrema for the ξs will not occur unless an experimental system is prepared with zero initial amounts of some products.
The number of physically-independent reactions can be even greater than the number of chemical components, and depends on the various reaction mechanisms. For example, there may be two (or more) reaction paths for the isomerism above. The reaction may occur by itself, but faster and with different intermediates, in the presence of a catalyst.
The (dimensionless) "units" may be taken to be molecules or moles. Moles are most commonly used, but it is more suggestive to picture incremental chemical reactions in terms of molecules. The Ns and ξs are reduced to molar units by dividing by the Avogadro constant. While dimensional mass units may be used, the comments about integers are then no longer applicable.
Stoichiometry matrix
In complex reactions, stoichiometries are often represented in a more compact form called the stoichiometry matrix. The stoichiometry matrix is denoted by the symbol N.
If a reaction network has n reactions and m participating molecular species, then the stoichiometry matrix will have correspondingly m rows and n columns.
For example, consider a system of four reactions among five different molecular species. The stoichiometry matrix for such a system has five rows and four columns, with each row corresponding to one of the molecular species and each column to one of the reactions. The process of converting a reaction scheme into a stoichiometry matrix can be a lossy transformation: for example, the stoichiometries in the second reaction simplify when included in the matrix. This means that it is not always possible to recover the original reaction scheme from a stoichiometry matrix.
Often the stoichiometry matrix is combined with the rate vector, v, and the species vector, x, to form a compact equation, the biochemical systems equation, describing the rates of change of the molecular species: dx/dt = N v.
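The following Python sketch illustrates the idea on a small hypothetical two-reaction network (A -> B and 2 B -> C), not the four-reaction system mentioned above, and evaluates dx/dt = N v for an assumed rate vector; both the network and the rates are illustrative assumptions.

    import numpy as np

    # Hypothetical network: R1: A -> B, R2: 2 B -> C
    # Rows are species (A, B, C); columns are reactions (R1, R2).
    N = np.array([
        [-1,  0],   # A: consumed in R1
        [ 1, -2],   # B: produced in R1, consumed twice in R2
        [ 0,  1],   # C: produced in R2
    ])

    v = np.array([0.5, 0.2])   # assumed reaction rates for R1 and R2

    dxdt = N @ v               # biochemical systems equation: dx/dt = N v
    print(dxdt)                # [-0.5  0.1  0.2]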
Gas stoichiometry
Gas stoichiometry is the quantitative relationship (ratio) between reactants and products in a chemical reaction with reactions that produce gases. Gas stoichiometry applies when the gases produced are assumed to be ideal, and the temperature, pressure, and volume of the gases are all known. The ideal gas law is used for these calculations. Often, but not always, the standard temperature and pressure (STP) are taken as 0 °C and 1 bar and used as the conditions for gas stoichiometric calculations.
Gas stoichiometry calculations solve for the unknown volume or mass of a gaseous product or reactant. For example, if we wanted to calculate the volume of gaseous NO2 produced from the combustion of 100 g of NH3, by the reaction 4 NH3 + 7 O2 -> 4 NO2 + 6 H2O,
we would carry out the following calculations:
There is a 1:1 molar ratio of NH3 to NO2 in the above balanced combustion reaction, so 5.871 mol of NO2 will be formed. We will employ the ideal gas law to solve for the volume at 0 °C (273.15 K) and 1 atmosphere using the gas law constant of R = 0.08206 L·atm·K−1·mol−1: V = nRT/P = (5.871 × 0.08206 × 273.15) / 1 ≈ 131.6 L.
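The last step is a direct application of the ideal gas law, V = nRT/P; the sketch below simply reproduces that arithmetic, with the amount of gas carried over from the worked example above.

    # Ideal gas law: V = n R T / P
    n = 5.871            # mol of gaseous product
    R = 0.08206          # L·atm·K^-1·mol^-1
    T = 273.15           # K (0 °C)
    P = 1.0              # atm

    V = n * R * T / P
    print(f"V = {V:.1f} L")   # V = 131.6 L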
Gas stoichiometry often involves having to know the molar mass of a gas, given the density of that gas. The ideal gas law can be re-arranged to obtain a relation between the density and the molar mass of an ideal gas: starting from
PV = nRT and n = m/M,
and writing the density as ρ = m/V, it follows that
M = ρRT/P
where:
where:
P = absolute gas pressure
V = gas volume
n = amount (measured in moles)
R = universal ideal gas law constant
T = absolute gas temperature
ρ = gas density at T and P
m = mass of gas
M = molar mass of gas
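As a sketch, the rearranged relation M = ρRT/P can be evaluated directly; the density used below is an assumed illustrative value, not a measured datum.

    # Molar mass of an ideal gas from its density: M = rho * R * T / P
    def molar_mass(rho_g_per_L, T_K, P_atm, R=0.08206):
        return rho_g_per_L * R * T_K / P_atm

    # Assumed example: a gas with density 1.25 g/L at 0 °C and 1 atm
    print(f"M = {molar_mass(1.25, 273.15, 1.0):.1f} g/mol")   # about 28.0 g/mol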
Stoichiometric air-to-fuel ratios of common fuels
In the combustion reaction, oxygen reacts with the fuel, and the point where exactly all oxygen is consumed and all fuel burned is defined as the stoichiometric point. With more oxygen (overstoichiometric combustion), some of it stays unreacted. Likewise, if the combustion is incomplete due to lack of sufficient oxygen, fuel remains unreacted. (Unreacted fuel may also remain because of slow combustion or insufficient mixing of fuel and oxygen – this is not due to stoichiometry.) Different hydrocarbon fuels have different contents of carbon, hydrogen and other elements, thus their stoichiometry varies.
Oxygen makes up only 20.95% of the volume of air, and only 23.20% of its mass. The air-fuel ratios listed below are much higher than the equivalent oxygen-fuel ratios, due to the high proportion of inert gasses in the air.
Gasoline engines can run at stoichiometric air-to-fuel ratio, because gasoline is quite volatile and is mixed (sprayed or carburetted) with the air prior to ignition. Diesel engines, in contrast, run lean, with more air available than simple stoichiometry would require. Diesel fuel is less volatile and is effectively burned as it is injected.
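For a pure hydrocarbon CxHy, the stoichiometric mass air-to-fuel ratio can be estimated from the oxygen demand of x + y/4 moles of O2 per mole of fuel and the 23.20% mass fraction of oxygen in air quoted above. The sketch below uses iso-octane (C8H18) as a stand-in for gasoline, so the result is an approximation rather than a tabulated value.

    # Stoichiometric air-to-fuel (mass) ratio for a hydrocarbon CxHy:
    # CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O
    def stoich_afr(x, y, o2_mass_fraction_air=0.2320):
        m_fuel = 12.011 * x + 1.008 * y      # g of fuel per mol of fuel
        m_o2 = (x + y / 4) * 31.998          # g of O2 needed per mol of fuel
        return m_o2 / o2_mass_fraction_air / m_fuel

    print(f"AFR(C8H18) = {stoich_afr(8, 18):.1f}")   # about 15.1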
See also
Non-stoichiometric compound
Biochemical systems equation
Chemical reaction
Chemical equation
Molecule
Molar mass
Ideal gas law
References
Zumdahl, Steven S. Chemical Principles. Houghton Mifflin, New York, 2005, pp 148–150.
Internal Combustion Engine Fundamentals, John B. Heywood
External links
Engine Combustion primer from the University of Plymouth
Free Stoichiometry Tutorials from Carnegie Mellon's ChemCollective
Stoichiometry Add-In for Microsoft Excel for calculation of molecular weights, reaction coëfficients and stoichiometry.
Reaction Stoichiometry Calculator a comprehensive free online reaction stoichiometry calculator.
Chemical reaction engineering | 0.788627 | 0.997374 | 0.786556 |
Process engineering | Process engineering is the understanding and application of the fundamental principles and laws of nature that allow humans to transform raw material and energy into products that are useful to society, at an industrial level. By taking advantage of the driving forces of nature such as pressure, temperature and concentration gradients, as well as the law of conservation of mass, process engineers can develop methods to synthesize and purify large quantities of desired chemical products. Process engineering focuses on the design, operation, control, optimization and intensification of chemical, physical, and biological processes. Their work involves analyzing the chemical makeup of various ingredients and determining how they might react with one another. A process engineer can specialize in a number of areas, including the following:
Agriculture processing
Food and dairy production
Beer and whiskey production
Cosmetics production
Pharmaceutical production
Petrochemical manufacturing
Mineral processing
Printed circuit board production
Overview
Process engineering involves the utilization of multiple tools and methods. Depending on the exact nature of the system, processes need to be simulated and modeled using mathematics and computer science. Processes where phase change and phase equilibria are relevant require analysis using the principles and laws of thermodynamics to quantify changes in energy and efficiency. In contrast, processes that focus on the flow of material and energy as they approach equilibria are best analyzed using the disciplines of fluid mechanics and transport phenomena. Disciplines within the field of mechanics need to be applied in the presence of fluids or porous and dispersed media. Materials engineering principles also need to be applied, when relevant.
Manufacturing in the field of process engineering involves an implementation of process synthesis steps. Regardless of the exact tools required, process engineering is then formatted through the use of a process flow diagram (PFD) where material flow paths, storage equipment (such as tanks and silos), transformations (such as distillation columns, receiver/head tanks, mixing, separations, pumping, etc.) and flowrates are specified, as well as a list of all pipes and conveyors and their contents, material properties such as density, viscosity, particle-size distribution, flowrates, pressures, temperatures, and materials of construction for the piping and unit operations.
The process flow diagram is then used to develop a piping and instrumentation diagram (P&ID), which graphically displays the actual process occurring. P&IDs are meant to be more complex and specific than a PFD, giving a more detailed and less ambiguous representation of the design. The P&ID is then used as a basis of design for developing the "system operation guide" or "functional design specification", which outlines the operation of the process. It guides the process through operation of machinery, safety in design, programming and effective communication between engineers.
From the P&ID, a proposed layout (general arrangement) of the process can be shown from an overhead view (plot plan) and a side view (elevation), and other engineering disciplines are involved such as civil engineers for site work (earth moving), foundation design, concrete slab design work, structural steel to support the equipment, etc. All previous work is directed toward defining the scope of the project, then developing a cost estimate to get the design installed, and a schedule to communicate the timing needs for engineering, procurement, fabrication, installation, commissioning, startup, and ongoing production of the process.
Depending on needed accuracy of the cost estimate and schedule that is required, several iterations of designs are generally provided to customers or stakeholders who feed back their requirements. The process engineer incorporates these additional instructions (scope revisions) into the overall design and additional cost estimates, and schedules are developed for funding approval. Following funding approval, the project is executed via project management.
Principal areas of focus in process engineering
Process engineering activities can be divided into the following disciplines:
Process design: synthesis of energy recovery networks, synthesis of distillation systems (azeotropic), synthesis of reactor networks, hierarchical decomposition flowsheets, superstructure optimization, design multiproduct batch plants, design of the production reactors for the production of plutonium, design of nuclear submarines.
Process control: model predictive control, controllability measures, robust control, nonlinear control, statistical process control, process monitoring, thermodynamics-based control, denoted by three essential items, a collection of measurements, method of taking measurements, and a system of controlling the desired measurement.
Process operations: scheduling process networks, multiperiod planning and optimization, data reconciliation, real-time optimization, flexibility measures, fault diagnosis.
Supporting tools: sequential modular simulation, equation-based process simulation, AI/expert systems, large-scale nonlinear programming (NLP), optimization of differential algebraic equations (DAEs), mixed-integer nonlinear programming (MINLP), global optimization, optimization under uncertainty, and quality function deployment (QFD).
Process economics: using simulation software such as ASPEN or SuperPro to determine the break-even point, net present value, marginal sales, marginal cost, and return on investment of the industrial plant after analysis of the plant's heat and mass transfer; a minimal numerical sketch of these quantities appears after this list.
Process data analytics: applying data analytics and machine learning methods to process manufacturing problems.
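As an illustration of the process-economics quantities mentioned above, the following minimal Python sketch computes a break-even production rate and a net present value from fixed costs, variable costs, a selling price and a series of yearly cash flows. All numerical figures are hypothetical placeholders, not outputs of any simulator such as ASPEN or SuperPro.

# Hypothetical figures for illustration only.
fixed_costs = 2_000_000.0        # $/year
variable_cost_per_unit = 35.0    # $/unit
price_per_unit = 50.0            # $/unit

# Break-even point: production rate at which revenue equals total cost.
break_even_units = fixed_costs / (price_per_unit - variable_cost_per_unit)

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial investment (negative),
    later entries are yearly net cash flows."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

project_npv = npv(0.10, [-5_000_000, 1_500_000, 1_500_000, 1_500_000, 1_500_000, 1_500_000])
print(f"break-even: {break_even_units:,.0f} units/year, NPV: ${project_npv:,.0f}")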
History of process engineering
Various chemical techniques have been used in industrial processes since time immemorial. However, it was not until the advent of thermodynamics and the law of conservation of mass in the 1780s that process engineering was properly developed and implemented as a discipline in its own right. The body of knowledge now known as process engineering was then forged out of trial and error throughout the Industrial Revolution.
The term process, as it relates to industry and production, dates back to the 18th century. During this time period, demand for various products began to increase drastically, and process engineers were required to optimize the processes by which these products were created.
By 1980, the concept of process engineering emerged from the fact that chemical engineering techniques and practices were being used in a variety of industries. By this time, process engineering had been defined as "the set of knowledge necessary to design, analyze, develop, construct, and operate, in an optimal way, the processes in which the material changes". By the end of the 20th century, process engineering had expanded from chemical engineering-based technologies to other applications, including metallurgical engineering, agricultural engineering, and product engineering.
See also
Chemical process modeling
Chemical technologist
Industrial engineering
Industrial process
Low-gravity process engineering
Materials science
Modular process skid
Process chemistry
Process flowsheeting
Process integration
Systems engineering process
References
External links
Advanced Process Engineering at Cranfield University (Cranfield, UK)
Sargent Centre for Process Systems Engineering (Imperial)
Process Systems Engineering at Cornell University (Ithaca, New York)
Department of Process Engineering at Stellenbosch University
Process Research and Intelligent Systems Modeling (PRISM) group at BYU
Process Systems Engineering at CMU
Process Systems Engineering Laboratory at RWTH Aachen
The Process Systems Engineering Laboratory (MIT)
Process Engineering Consulting at Canada
Process engineering
Engineering disciplines
Chemical processes
Quantum biology
Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems.
Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Moreover, quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative.
Currently, four major life processes have been identified as being influenced by quantum effects: enzyme catalysis, sensory processes, energy transfer, and information encoding.
History
Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin published proton tunneling as another mechanism for DNA mutation. In his paper, he stated that there is a new field of study called "quantum biology". In 1979, the Soviet and Ukrainian physicist Alexander Davydov published the first textbook on quantum biology entitled Biology and Quantum Mechanics.
Enzyme catalysis
Enzymes have been postulated to use quantum tunneling to transfer electrons in electron transport chains. It is possible that protein quaternary architectures may have adapted to enable sustained quantum entanglement and coherence, which are two of the limiting factors for quantum tunneling in biological entities. These architectures might account for a greater percentage of quantum energy transfer, which occurs through electron transport and proton tunneling (usually in the form of hydrogen ions, H+). Tunneling refers to the ability of a subatomic particle to travel through a potential energy barrier. This ability is due, in part, to the principle of complementarity, which holds that certain objects have pairs of properties that cannot be measured separately without changing the outcome of measurement. Particles such as electrons and protons exhibit wave-particle duality; they can pass through energy barriers due to their wave characteristics without violating the laws of physics. To quantify how quantum tunneling contributes to enzymatic activity, many biophysicists track the transfer of hydrogen ions: proton transfer is a staple of an organelle's primary energy-processing network, and quantum effects are most usually at work at proton-transfer sites over distances on the order of an angstrom (1 Å). In physics, a semiclassical (SC) approach is most useful for describing this process because it bridges quantum elements (e.g. particles) and macroscopic phenomena (e.g. biochemical reactions). Aside from hydrogen tunneling, studies also show that electron transfer between redox centers through quantum tunneling plays an important role in the enzymatic activity of photosynthesis and cellular respiration (see also the Mitochondria section below).
Ferritin
Ferritin is an iron storage protein that is found in plants and animals. It is usually formed from 24 subunits that self-assemble into a spherical shell that is approximately 2 nm thick, with an outer diameter that varies with iron loading up to about 16 nm. Up to ~4500 iron atoms can be stored inside the core of the shell in the Fe3+ oxidation state as water-insoluble compounds such as ferrihydrite and magnetite. Ferritin is able to store electrons for at least several hours, which reduce the Fe3+ to water soluble Fe2+. Electron tunneling as the mechanism by which electrons transit the 2 nm thick protein shell was proposed as early as 1988. Electron tunneling and other quantum mechanical properties of ferritin were observed in 1992, and electron tunneling at room temperature and ambient conditions was observed in 2005. Electron tunneling associated with ferritin is a quantum biological process, and ferritin is a quantum biological agent.
Electron tunneling through ferritin between electrodes is independent of temperature, which indicates that it is substantially coherent and activation-less. The electron tunneling distance is a function of the size of the ferritin. Single electron tunneling events can occur over distances of up to 8 nm through the ferritin, and sequential electron tunneling can occur up to 12 nm through the ferritin. It has been proposed that the electron tunneling is magnon-assisted and associated with magnetite microdomains in the ferritin core.
Early evidence of quantum mechanical properties exhibited by ferritin in vivo was reported in 2004, where increased magnetic ordering of ferritin structures in placental macrophages was observed using small angle neutron scattering (SANS). Quantum dot solids also show increased magnetic ordering in SANS testing, and can conduct electrons over long distances. Increased magnetic ordering of ferritin cores disposed in an ordered layer on a silicon substrate with SANS testing has also been observed. Ferritin structures like those in placental macrophages have been tested in solid state configurations and exhibit quantum dot solid-like properties of conducting electrons over distances of up to 80 microns through sequential tunneling and formation of Coulomb blockades. Electron transport through ferritin in placental macrophages may be associated with an anti-inflammatory function.
Conductive atomic force microscopy of substantia nigra pars compacta (SNc) tissue demonstrated evidence of electron tunneling between ferritin cores, in structures that correlate to layers of ferritin outside of neuromelanin organelles.
Evidence of ferritin layers in cell bodies of large dopamine neurons of the SNc and between those cell bodies in glial cells has also been found, and is hypothesized to be associated with neuron function. Overexpression of ferritin reduces the accumulation of reactive oxygen species (ROS), and may act as a catalyst by increasing the ability of electrons from antioxidants to neutralize ROS through electron tunneling. Ferritin has also been observed in ordered configurations in lysosomes associated with erythropoiesis, where it may be associated with red blood cell production. While direct evidence of tunneling associated with ferritin in vivo in live cells has not yet been obtained, it may be possible to do so using QDs tagged with anti-ferritin, which should emit photons if electrons stored in the ferritin core tunnel to the QD.
Sensory processes
Olfaction
Olfaction, the sense of smell, can be broken down into two parts: the reception and detection of a chemical, and how that detection is sent to and processed by the brain. This process of detecting an odorant is still in question. One theory, named the "shape theory of olfaction", suggests that certain olfactory receptors are triggered by certain shapes of chemicals and those receptors send a specific message to the brain. Another theory (based on quantum phenomena) suggests that the olfactory receptors detect the vibration of the molecules that reach them and that the "smell" is due to different vibrational frequencies; this theory is aptly called the "vibration theory of olfaction."
The vibration theory of olfaction, proposed in 1938 by Malcolm Dyson and reinvigorated by Luca Turin in 1996, suggests that the mechanism for the sense of smell relies on G-protein-coupled receptors that detect molecular vibrations through inelastic electron tunneling (tunneling in which the electron loses energy) across the molecule. In this process a molecule would fill the binding site of a G-protein-coupled receptor. After the binding of the chemical to the receptor, the chemical would act as a bridge allowing the electron to be transferred through the protein. As the electron crosses what would otherwise have been a barrier, it loses energy to a vibration of the molecule newly bound to the receptor, and this results in the ability to smell the molecule.
While the vibration theory has some experimental proof of concept, experiments have produced conflicting results. In some experiments, animals are able to distinguish between molecules of different vibrational frequencies but the same structure, while other experiments show that people are unable to distinguish smells on the basis of distinct molecular frequencies alone.
Vision
Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photo receptor and resulting signal transduction pathways lead to a visual signal. However, the photoisomerization reaction occurs at a rapid rate, in under 200 femtoseconds, with high yield. Models suggest the use of quantum effects in shaping the ground state and excited state potentials in order to achieve this efficiency.
The sensor in the retina of the human eye is sensitive enough to detect a single photon. Single-photon detection could lead to multiple different technologies. One area of development is in quantum communication and cryptography. The idea is to use a biometric system to measure the eye using only a small number of points across the retina, with random flashes of photons that "read" the retina and identify the individual. This biometric system would only allow a certain individual with a specific retinal map to decode the message. The message cannot be decoded by anyone else unless the eavesdropper were to guess the proper map or could read the retina of the intended recipient of the message.
Energy transfer
Photosynthesis
Photosynthesis refers to the biological process that photosynthetic cells use to synthesize organic compounds from inorganic starting materials using sunlight. What has been primarily implicated as exhibiting non-trivial quantum behaviors is the light reaction stage of photosynthesis. In this stage, photons are absorbed by the membrane-bound photosystems. Photosystems contain two major domains, the light-harvesting complex (antennae) and the reaction center. These antennae vary among organisms. For example, bacteria use circular aggregates of chlorophyll pigments, while plants use membrane-embedded protein and chlorophyll complexes. Regardless, photons are first captured by the antennae and passed on to the reaction-center complex. Various pigment-protein complexes, such as the FMO complex in green sulfur bacteria, are responsible for transferring energy from antennae to reaction site. The photon-driven excitation of the reaction-center complex mediates the oxidation and the reduction of the primary electron acceptor, a component of the reaction-center complex. Much like the electron transport chain of the mitochondria, a linear series of oxidations and reductions drives proton (H+) pumping across the thylakoid membrane, the development of a proton motive force, and energetic coupling to the synthesis of ATP.
Previous understandings of electron-excitation transference (EET) from light-harvesting antennae to the reaction center have relied on the Förster theory of incoherent EET, postulating weak electron coupling between chromophores and incoherent hopping from one to another. This theory has largely been disproven by FT electron spectroscopy experiments that show electron absorption and transfer with an efficiency of above 99%, which cannot be explained by classical mechanical models. Instead, as early as 1938, scientists theorized that quantum coherence was the mechanism for excitation-energy transfer. Indeed, the structure and nature of the photosystem places it in the quantum realm, with EET ranging from the femto- to nanosecond scale, covering sub-nanometer to nanometer distances. The effects of quantum coherence on EET in photosynthesis are best understood through state and process coherence. State coherence refers to the extent of individual superpositions of ground and excited states for quantum entities, such as excitons. Process coherence, on the other hand, refers to the degree of coupling between multiple quantum entities and their evolution as either dominated by unitary or dissipative parts, which compete with one another. Both of these types of coherence are implicated in photosynthetic EET, where an exciton is coherently delocalized over several chromophores. This delocalization allows for the system to simultaneously explore several energy paths and use constructive and destructive interference to guide the path of the exciton's wave packet. It is presumed that natural selection has favored the most efficient path to the reaction center. Experimentally, the interaction between the different frequency wave packets, made possible by long-lived coherence, will produce quantum beats.
While quantum photosynthesis is still an emerging field, there have been many experimental results that support the quantum-coherence understanding of photosynthetic EET. A 2007 study claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K). In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. Single-molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged to explain unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and its thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds.
In 2017, the first control experiment with the original FMO protein under ambient conditions confirmed that electronic quantum effects are washed out within 60 femtoseconds, while the overall exciton transfer takes a time on the order of a few picoseconds. In 2020, a review based on a wide collection of control experiments and theory concluded that the proposed quantum effects, in the form of long-lived electronic coherences in the FMO system, do not hold. Instead, research investigating transport dynamics suggests that interactions between electronic and vibrational modes of excitation in FMO complexes require a semi-classical, semi-quantum explanation for the transfer of exciton energy. In other words, while quantum coherence dominates in the short term, a classical description is most accurate for describing the long-term behavior of the excitons.
Another process in photosynthesis that has almost 100% efficiency is charge transfer, again suggesting that quantum mechanical phenomena are at play. In 1966, a study on the photosynthetic bacterium Chromatium found that at temperatures below 100 K, cytochrome oxidation is temperature-independent, slow (on the order of milliseconds), and very low in activation energy. The authors, Don DeVault and Britton Chance, postulated that these characteristics of electron transfer are indicative of quantum tunneling, whereby electrons penetrate a potential barrier despite possessing less energy than is classically necessary.
Mitochondria
Mitochondria have been demonstrated to utilize quantum tunneling in their function as the powerhouse of eukaryotic cells. Similar to the light reactions in the thylakoid, linearly associated membrane-bound proteins comprising the electron transport chain (ETC) energetically link the reduction of O2 with the development of a proton motive gradient (H+) across the inner membrane of the mitochondria. This energy stored as a proton motive gradient is then coupled with the synthesis of ATP. It is significant that the mitochondrial conversion of biomass into chemical ATP achieves 60-70% thermodynamic efficiency, far superior to that of man-made engines. This high degree of efficiency is largely attributed to the quantum tunnelling of electrons in the ETC and of protons in the proton motive gradient. Indeed, electron tunneling has already been demonstrated in certain elements of the ETC, including NADH:ubiquinone oxidoreductase (Complex I) and CoQH2-cytochrome c reductase (Complex III).
In quantum mechanics, both electrons and protons are quantum entities that exhibit wave-particle duality, showing both particle-like and wave-like properties depending on the method of experimental observation. Quantum tunneling is a direct consequence of this wave-like nature of quantum entities, permitting passage through a potential energy barrier that would otherwise confine the entity. Moreover, tunneling depends on the shape and size of the potential barrier relative to the incoming energy of the particle. Because the incoming particle is defined by its wave function, its tunneling probability depends exponentially on the shape and width of the potential barrier; for example, if the barrier is relatively wide, the incoming particle's probability of tunneling decreases sharply. The potential barrier, in some sense, can come in the form of an actual biomaterial barrier. The inner mitochondrial membrane, which houses the various components of the ETC, is on the order of 7.5 nm thick. The inner membrane of a mitochondrion must be overcome to permit signals (in the form of electrons, protons, H+) to transfer from the site of emittance (internal to the mitochondria) to the site of acceptance (i.e. the electron transport chain proteins). In order to transfer particles, the membrane of the mitochondria must have the correct density of phospholipids to produce a charge distribution that attracts the particle in question; for instance, a greater density of phospholipids contributes to a greater conductance of protons.
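The exponential sensitivity described above can be illustrated with the standard rectangular-barrier approximation T ≈ exp(−2κL), where κ = √(2m(V − E))/ħ. The short Python sketch below uses illustrative barrier parameters (a 1 eV barrier, 0.5 eV particle energy, 1 nm width), not measured mitochondrial values, and shows how much more strongly tunneling is suppressed for a proton than for an electron.

import math

HBAR = 1.054571817e-34           # reduced Planck constant, J*s
EV = 1.602176634e-19             # joules per electronvolt
M_ELECTRON = 9.1093837015e-31    # electron mass, kg
M_PROTON = 1.67262192369e-27     # proton mass, kg

def transmission(mass, barrier_height_ev, energy_ev, width_m):
    """Approximate tunneling probability T ~ exp(-2*kappa*L) for E < V."""
    kappa = math.sqrt(2.0 * mass * (barrier_height_ev - energy_ev) * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# Illustrative numbers only: 1 eV barrier, 0.5 eV particle energy, 1 nm width.
print("electron:", transmission(M_ELECTRON, 1.0, 0.5, 1e-9))  # small but non-negligible
print("proton:  ", transmission(M_PROTON, 1.0, 0.5, 1e-9))    # vastly smaller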
Molecular solitons in proteins
Alexander Davydov developed the quantum theory of molecular solitons in order to explain the transport of energy in protein α-helices in general and the physiology of muscle contraction in particular. He showed that the molecular solitons are able to preserve their shape through nonlinear interaction of amide I excitons and phonon deformations inside the lattice of hydrogen-bonded peptide groups. In 1979, Davydov published his complete textbook on quantum biology entitled "Biology and Quantum Mechanics" featuring quantum dynamics of proteins, cell membranes, bioenergetics, muscle contraction, and electron transport in biomolecules.
Information encoding
Magnetoreception
Magnetoreception is the ability of animals to navigate using the inclination of the magnetic field of the Earth. A possible explanation for magnetoreception is the entangled radical pair mechanism. The radical-pair mechanism is well established in spin chemistry, and was speculated to apply to magnetoreception in 1978 by Schulten et al. The ratio between singlet and triplet pairs is changed by the interaction of entangled electron pairs with the magnetic field of the Earth. In 2000, cryptochrome was proposed as the "magnetic molecule" that could harbor magnetically sensitive radical pairs. Cryptochrome, a flavoprotein found in the eyes of European robins and other animal species, is the only protein known to form photoinduced radical pairs in animals. When it interacts with light particles, cryptochrome goes through a redox reaction, which yields radical pairs both during the photo-reduction and the oxidation. The function of cryptochrome is diverse across species; however, the photoinduction of radical pairs occurs by exposure to blue light, which excites an electron in a chromophore. Magnetoreception is also possible in the dark, so the mechanism must rely more on the radical pairs generated during light-independent oxidation.
Experiments in the lab support the basic theory that radical-pair electrons can be significantly influenced by very weak magnetic fields, i.e., merely the direction of a weak magnetic field can affect a radical pair's reactivity and therefore can "catalyze" the formation of chemical products. Whether this mechanism applies to magnetoreception and/or quantum biology, that is, whether Earth's magnetic field "catalyzes" the formation of biochemical products by the aid of radical pairs, is not fully clear. Radical pairs need not be entangled, the key quantum feature of the radical-pair mechanism, to play a part in these processes. There are entangled and non-entangled radical pairs, but disturbing only entangled radical pairs is not possible with current technology. Researchers found evidence for the radical-pair mechanism of magnetoreception when European robins, cockroaches, and garden warblers could no longer navigate when exposed to a radio frequency that obstructs magnetic-field sensing and radical-pair chemistry. Further evidence came from a comparison of Cryptochrome 4 (CRY4) from migrating and non-migrating birds. CRY4 from chicken and pigeon were found to be less sensitive to magnetic fields than those from the (migrating) European robin, suggesting evolutionary optimization of this protein as a sensor of magnetic fields.
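For a rough sense of the radio frequencies relevant to such experiments, the electron spin precession (Larmor) frequency in the geomagnetic field can be estimated from the electron gyromagnetic ratio (about 28 GHz per tesla). This back-of-the-envelope Python sketch assumes a typical field strength of roughly 50 microtesla; it is illustrative only and not tied to any particular study.

# Electron spin precession frequency in a weak magnetic field (order-of-magnitude estimate).
GAMMA_E = 28.0e9      # electron gyromagnetic ratio, Hz per tesla (approximate)
B_EARTH = 50e-6       # typical geomagnetic field strength, tesla (assumed value)

larmor_hz = GAMMA_E * B_EARTH
print(f"Electron Larmor frequency in ~50 uT: {larmor_hz / 1e6:.2f} MHz")  # about 1.4 MHz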
DNA mutation
DNA acts as the instructions for making proteins throughout the body. It consists of 4 nucleotides: guanine, thymine, cytosine, and adenine. The order of these nucleotides gives the "recipe" for the different proteins.
Whenever a cell reproduces, it must copy these strands of DNA. However, sometimes during the process of copying the strand of DNA a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Löwdin DNA mutation model. In this model, a nucleotide may spontaneously change its form through a process of quantum tunneling. Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently change the structure and order of the DNA strand.
Exposure to ultraviolet light and other types of radiation can cause DNA mutation and damage. The radiation can also modify the bonds along the DNA strand in the pyrimidines and cause adjacent pyrimidines to bond with each other, creating a dimer.
In many prokaryotes and plants, these bonds are repaired by a DNA-repair-enzyme photolyase. As its prefix implies, photolyase is reliant on light in order to repair the strand. Photolyase works with its cofactor FADH, flavin adenine dinucleotide, while repairing the DNA. Photolyase is excited by visible light and transfers an electron to the cofactor FADH. FADH—now in the possession of an extra electron—transfers the electron to the dimer to break the bond and repair the DNA. The electron tunnels from the FADH to the dimer. Although the range of this tunneling is much larger than feasible in a vacuum, the tunneling in this scenario is said to be "superexchange-mediated tunneling," and is possible due to the protein's ability to boost the tunneling rates of the electron.
Other
Other quantum phenomena in biological systems include the conversion of chemical energy into motion and Brownian motors in many cellular processes.
Pseudoscience
Alongside the multiple strands of scientific inquiry into quantum mechanics has come unconnected pseudoscientific interest; this has caused scientists to approach quantum biology cautiously.
Hypotheses such as orchestrated objective reduction which postulate a link between quantum mechanics and consciousness have drawn criticism from the scientific community with some claiming it to be pseudoscientific and "an excuse for quackery".
References
External links
Philip Ball (2015). "Quantum Biology: An Introduction". The Royal Institution
Quantum Biology and the Hidden Nature of Nature, World Science Festival 2012, video of podium discussion
Quantum Biology: Current Status and Opportunities, September 17-18, 2012, University of Surrey, UK
Biophysics
Polyacrylamide gel electrophoresis
Polyacrylamide gel electrophoresis (PAGE) is a technique widely used in biochemistry, forensic chemistry, genetics, molecular biology and biotechnology to separate biological macromolecules, usually proteins or nucleic acids, according to their electrophoretic mobility. Electrophoretic mobility is a function of the length, conformation, and charge of the molecule. Polyacrylamide gel electrophoresis is a powerful tool for analyzing RNA samples: when run under denaturing conditions, it provides information on the composition of the RNA species in a sample.
Hydration of acrylonitrile by nitrile hydratase results in the formation of acrylamide molecules. Acrylamide monomer is a powder before the addition of water. Acrylamide is toxic to the human nervous system, so all safety measures must be followed when working with it. Acrylamide is soluble in water, and upon addition of free-radical initiators it polymerizes, resulting in the formation of polyacrylamide. Making polyacrylamide gel via acrylamide hydration is useful because the pore size can be regulated: increased concentrations of acrylamide result in decreased pore size after polymerization. A polyacrylamide gel with small pores is better for examining smaller molecules, since the small molecules can enter the pores and travel through the gel while large molecules get trapped at the pore openings.
As with all forms of gel electrophoresis, molecules may be run in their native state, preserving the molecules' higher-order structure. This method is called native-PAGE. Alternatively, a chemical denaturant may be added to remove this structure and turn the molecule into an unstructured molecule whose mobility depends only on its length (because the protein-SDS complexes all have a similar mass-to-charge ratio). This procedure is called SDS-PAGE. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) is a method of separating molecules based on the difference of their molecular weight. At the pH at which gel electrophoresis is carried out the SDS molecules are negatively charged and bind to proteins in a set ratio, approximately one molecule of SDS for every 2 amino acids. In this way, the detergent provides all proteins with a uniform charge-to-mass ratio. By binding to the proteins the detergent destroys their secondary, tertiary and/or quaternary structure denaturing them and turning them into negatively charged linear polypeptide chains. When subjected to an electric field in PAGE, the negatively charged polypeptide chains travel toward the anode with different mobility. Their mobility, or the distance traveled by molecules, is inversely proportional to the logarithm of their molecular weight. By comparing the relative ratio of the distance traveled by each protein to the length of the gel (Rf) one can make conclusions about the relative molecular weight of the proteins, where the length of the gel is determined by the distance traveled by a small molecule like a tracking dye.
For nucleic acids, urea is the most commonly used denaturant. For proteins, sodium dodecyl sulfate (SDS) is an anionic detergent applied to protein samples to coat proteins in order to impart two negative charges (from every SDS molecule) to every two amino acids of the denatured protein. 2-Mercaptoethanol may also be used to disrupt the disulfide bonds found between the protein complexes, which helps further denature the protein. In most proteins, the binding of SDS to the polypeptide chains impart an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis. Proteins that have a greater hydrophobic content – for instance, many membrane proteins, and those that interact with surfactants in their native environment – are intrinsically harder to treat accurately using this method, due to the greater variability in the ratio of bound SDS. Procedurally, using both Native and SDS-PAGE together can be used to purify and to separate the various subunits of the protein. Native-PAGE keeps the oligomeric form intact and will show a band on the gel that is representative of the level of activity. SDS-PAGE will denature and separate the oligomeric form into its monomers, showing bands that are representative of their molecular weights. These bands can be used to identify and assess the purity of the protein.
Procedure
Sample preparation
Samples may be any material containing proteins or nucleic acids. These may be biologically derived, for example from prokaryotic or eukaryotic cells, tissues, viruses, environmental samples, or purified proteins. In the case of solid tissues or cells, these are often first broken down mechanically using a blender (for larger sample volumes), using a homogenizer (smaller volumes), by sonicator or by using cycling of high pressure, and a combination of biochemical and mechanical techniques – including various types of filtration and centrifugation – may be used to separate different cell compartments and organelles prior to electrophoresis. Synthetic biomolecules such as oligonucleotides may also be used as analytes.
The sample to analyze is optionally mixed with a chemical denaturant if so desired, usually SDS for proteins or urea for nucleic acids. SDS is an anionic detergent that denatures secondary and non–disulfide–linked tertiary structures, and additionally applies a negative charge to each protein in proportion to its mass. Urea breaks the hydrogen bonds between the base pairs of the nucleic acid, causing the constituent strands to separate. Heating the samples to at least 60 °C further promotes denaturation.
In addition to SDS, proteins may optionally be briefly heated to near boiling in the presence of a reducing agent, such as dithiothreitol (DTT) or 2-mercaptoethanol (beta-mercaptoethanol/BME), which further denatures the proteins by reducing disulfide linkages, thus overcoming some forms of tertiary protein folding, and breaking up quaternary protein structure (oligomeric subunits). This is known as reducing SDS-PAGE.
A tracking dye may be added to the solution. This typically has a higher electrophoretic mobility than the analytes to allow the experimenter to track the progress of the solution through the gel during the electrophoretic run.
Preparing acrylamide gels
The gels typically consist of acrylamide, bisacrylamide, the optional denaturant (SDS or urea), and a buffer with an adjusted pH. The solution may be degassed under a vacuum to prevent the formation of air bubbles during polymerization. Alternatively, butanol may be added to the resolving gel (for proteins) after it is poured, as butanol removes bubbles and makes the surface smooth.
A source of free radicals and a stabilizer, such as ammonium persulfate and TEMED, are added to initiate polymerization. The polymerization reaction creates a gel because of the added bisacrylamide, which can form cross-links between two acrylamide molecules. The ratio of bisacrylamide to acrylamide can be varied for special purposes, but is generally about 1 part in 35. The acrylamide concentration of the gel can also be varied, generally in the range from 5% to 25%. Lower-percentage gels are better for resolving very high molecular weight molecules, while much higher percentages of acrylamide are needed to resolve smaller proteins. The average pore diameter of polyacrylamide gels is determined by the total concentration of acrylamides (%T, with T = total concentration of acrylamide and bisacrylamide) and the concentration of the cross-linker bisacrylamide (%C, with C = bisacrylamide concentration). The pore size decreases as %T increases. Concerning %C, a concentration of 5% produces the smallest pores, since the influence of bisacrylamide on the pore size follows a parabola with a vertex at 5%.
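As a worked example of the %T and %C definitions used above, the following Python sketch computes the masses of acrylamide and bisacrylamide needed for a gel of a given volume; the 12% T and 3.3% C inputs are arbitrary illustrative values, not a recommended recipe.

def gel_recipe(volume_ml, percent_t, percent_c):
    """Masses (g) of acrylamide and bisacrylamide for a gel.
    %T = total monomer (acrylamide + bisacrylamide) in g per 100 mL;
    %C = bisacrylamide as a percentage of total monomer."""
    total_monomer_g = percent_t / 100.0 * volume_ml
    bis_g = percent_c / 100.0 * total_monomer_g
    acrylamide_g = total_monomer_g - bis_g
    return acrylamide_g, bis_g

acry, bis = gel_recipe(volume_ml=10.0, percent_t=12.0, percent_c=3.3)
print(f"acrylamide: {acry:.3f} g, bisacrylamide: {bis:.3f} g")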
Gels are usually polymerized between two glass plates in a gel caster, with a comb inserted at the top to create the sample wells. After the gel is polymerized the comb can be removed and the gel is ready for electrophoresis.
Electrophoresis
Various buffer systems are used in PAGE depending on the nature of the sample and the experimental objective. The buffers used at the anode and cathode may be the same or different.
An electric field is applied across the gel, causing the negatively charged proteins or nucleic acids to migrate across the gel away from the negative electrode (which is the cathode, this being an electrolytic rather than a galvanic cell) and towards the positive electrode (the anode). Depending on their size, each biomolecule moves differently through the gel matrix: small molecules more easily fit through the pores in the gel, while larger ones have more difficulty. The gel is usually run for a few hours, though this depends on the voltage applied across the gel; migration occurs more quickly at higher voltages, but these results are typically less accurate than those at lower voltages. After the set amount of time, the biomolecules have migrated different distances based on their size. Smaller biomolecules travel farther down the gel, while larger ones remain closer to the point of origin. Biomolecules may therefore be separated roughly according to size, which depends mainly on molecular weight under denaturing conditions, but also depends on higher-order conformation under native conditions. The gel mobility is defined as the rate of migration under a voltage gradient of 1 V/cm and has units of cm²/(s·V). For analytical purposes, the relative mobility of biomolecules, Rf, the ratio of the distance the molecule traveled on the gel to the total travel distance of a tracking dye, is plotted versus the molecular weight of the molecule (or sometimes the log of MW, or rather the Mr, molecular radius). Such typically linear plots represent the standard markers or calibration curves that are widely used for the quantitative estimation of a variety of biomolecular sizes.
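The calibration-curve idea can be made concrete with a short Python sketch that fits log10(molecular weight) against Rf for a set of marker proteins and then estimates the size of an unknown band; the marker ladder and the measured Rf value below are made-up example numbers.

import numpy as np

# Hypothetical marker ladder: (molecular weight in kDa, measured Rf).
markers = [(250, 0.10), (130, 0.25), (70, 0.42), (35, 0.63), (15, 0.85)]
mw = np.array([m for m, _ in markers], dtype=float)
rf = np.array([r for _, r in markers], dtype=float)

# Linear fit of log10(MW) versus Rf (the standard semi-log calibration curve).
slope, intercept = np.polyfit(rf, np.log10(mw), 1)

def estimate_mw(rf_unknown):
    """Estimate molecular weight (kDa) of a band from its relative mobility."""
    return 10 ** (slope * rf_unknown + intercept)

print(f"Band at Rf = 0.50 is roughly {estimate_mw(0.50):.0f} kDa")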
Certain glycoproteins, however, behave anomalously on SDS gels. Additionally, the analysis of larger proteins ranging from 250,000 to 600,000 Da is also reported to be problematic due to the fact that such polypeptides move improperly in the normally used gel systems.
Further processing
Following electrophoresis, the gel may be stained (for proteins, most commonly with Coomassie brilliant blue R-250 or autoradiography; for nucleic acids, ethidium bromide; or for either, silver stain), allowing visualization of the separated proteins, or processed further (e.g. Western blot). After staining, different biomolecular species appear as distinct bands within the gel. It is common to run molecular weight size markers of known molecular weight in a separate lane in the gel to calibrate the gel and determine the approximate molecular mass of unknown biomolecules by comparing the distance traveled relative to the marker.
For proteins, SDS-PAGE is usually the first choice as an assay of purity due to its reliability and ease. The presence of SDS and the denaturing step make proteins separate, approximately based on size, but aberrant migration of some proteins may occur. Different proteins may also stain differently, which interferes with quantification by staining. PAGE may also be used as a preparative technique for the purification of proteins. For example, preparative native PAGE is a method for separating native metalloproteins in complex biological matrices.
Chemical ingredients and their roles
Polyacrylamide gel (PAG) had been known as a potential embedding medium for sectioning tissues as early as 1964, and two independent groups employed PAG in electrophoresis in 1959. It possesses several electrophoretically desirable features that make it a versatile medium. It is a synthetic, thermo-stable, transparent, strong, chemically relatively inert gel, and can be prepared with a wide range of average pore sizes. The pore size of a gel and the reproducibility in gel pore size are determined by three factors: the total amount of acrylamide present (%T) (T = total concentration of acrylamide and bisacrylamide monomer), the amount of cross-linker (%C) (C = bisacrylamide concentration), and the time of polymerization of acrylamide (cf. QPNC-PAGE). Pore size decreases with increasing %T; with cross-linking, 5%C gives the smallest pore size. Any increase or decrease in %C from 5% increases the pore size, as pore size with respect to %C is a parabolic function with a vertex at 5%C. This appears to be because of non-homogeneous bundling of polymer strands within the gel. This gel material can also withstand high voltage gradients, is amenable to various staining and destaining procedures, and can be digested to extract separated fractions or dried for autoradiography and permanent recording.
Components
Polyacrylamide gels are composed of a stacking gel and a separating gel. Stacking gels have a higher porosity relative to the separating gel and allow proteins to migrate into a concentrated zone. Additionally, stacking gels usually have a pH of 6.8, since the neutral glycine molecules allow for faster protein mobility. Separating gels have a pH of 8.8, where the anionic glycine slows down the mobility of proteins. Separating gels allow for the separation of proteins and have a relatively lower porosity. Here, the proteins are separated based on size (in SDS-PAGE) or size and charge (in native PAGE).
Chemical buffer stabilizes the pH value to the desired value within the gel itself and in the electrophoresis buffer. The choice of buffer also affects the electrophoretic mobility of the buffer counterions and thereby the resolution of the gel. The buffer should also be unreactive and not modify or react with most proteins. Different buffers may be used as cathode and anode buffers, respectively, depending on the application. Multiple pH values may be used within a single gel, for example in DISC electrophoresis. Common buffers in PAGE include Tris, Bis-Tris, or imidazole.
Counterions balance the intrinsic charge of the buffer ion and also affect the electric field strength during electrophoresis. Highly charged and mobile ions are often avoided in SDS-PAGE cathode buffers, but may be included in the gel itself, where they migrate ahead of the protein. In applications such as DISC SDS-PAGE the pH values within the gel may vary to change the average charge of the counterions during the run to improve resolution. Popular counterions are glycine and tricine. Glycine has been used as the source of trailing ion or slow ion because its pKa is 9.69 and the mobility of glycinate is such that the effective mobility can be set at a value below that of the slowest known proteins of net negative charge in the pH range. The minimum pH of this range is approximately 8.0.
Acrylamide (mW: 71.08), when dissolved in water, undergoes slow, spontaneous autopolymerization, joining molecules together in head-to-tail fashion to form long single-chain polymers. The presence of a free-radical-generating system greatly accelerates polymerization. This kind of reaction is known as vinyl addition polymerisation. A solution of these polymer chains becomes viscous but does not form a gel, because the chains simply slide over one another. Gel formation requires linking various chains together. Acrylamide is carcinogenic, a neurotoxin, and a reproductive toxin. It is also essential to store acrylamide in a cool, dark and dry place to reduce autopolymerisation and hydrolysis.
Bisacrylamide (N,N′-Methylenebisacrylamide) (mW: 154.17) is the most frequently used cross-linking agent for polyacrylamide gels. Chemically it can be thought of as two acrylamide molecules coupled head to head at their non-reactive ends. Bisacrylamide can crosslink two polyacrylamide chains to one another, thereby resulting in a gel.
Sodium dodecyl sulfate (SDS) (mW: 288.38) (only used in denaturing protein gels) is a strong detergent agent used to denature native proteins to individual polypeptides. This denaturation, which is referred to as reconstructive denaturation, is not accomplished by the total linearization of the protein, but instead through a conformational change to a combination of random coil and α-helix secondary structures. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. It binds to polypeptides in a constant weight ratio of 1.4 g SDS/g of polypeptide. In this process, the intrinsic charges of polypeptides become negligible compared to the negative charges contributed by SDS. Thus polypeptides after treatment become rod-like structures possessing a uniform charge density, that is, the same net negative charge per unit weight. The electrophoretic mobilities of these proteins are a linear function of the logarithms of their molecular weights. Without SDS, different proteins with similar molecular weights would migrate differently due to differences in mass-charge ratio, as each protein has an isoelectric point and molecular weight particular to its primary structure. This is known as native PAGE. Adding SDS solves this problem, as it binds to and unfolds the protein, giving a near uniform negative charge along the length of the polypeptide.
Urea (mW: 60.06) is a chaotropic agent that increases the entropy of the system by interfering with intramolecular interactions mediated by non-covalent forces such as hydrogen bonds and van der Waals forces. Macromolecular structure is dependent on the net effect of these forces; it therefore follows that an increase in chaotropic solutes denatures macromolecules.
Ammonium persulfate (APS) (mW: 228.2) is a source of free radicals and is often used as an initiator for gel formation. An alternative source of free radicals is riboflavin, which generates free radicals in a photochemical reaction.
TEMED (N,N,N′,N′-tetramethylethylenediamine) (mW: 116.21) stabilizes free radicals and improves polymerization. The rate of polymerisation and the properties of the resulting gel depend on the concentrations of free radicals. Increasing the amount of free radicals results in a decrease in the average polymer chain length, an increase in gel turbidity and a decrease in gel elasticity. Decreasing the amount shows the reverse effect. The lowest catalytic concentrations that allow polymerisation in a reasonable period of time should be used. APS and TEMED are typically used at approximately equimolar concentrations in the range of 1 to 10 mM.
Chemicals for processing and visualization
The following chemicals and procedures are used for processing of the gel and the protein samples visualized in it.
Tracking dye; as proteins and nucleic acids are mostly colorless, their progress through the gel during electrophoresis cannot be easily followed. Anionic dyes of a known electrophoretic mobility are therefore usually included in the PAGE sample buffer. A very common tracking dye is bromophenol blue (BPB, 3',3",5',5"-tetrabromophenolsulfonphthalein). This dye is coloured at alkaline and neutral pH and is a small, negatively charged molecule that moves towards the anode. Being a highly mobile molecule, it moves ahead of most proteins. When it reaches the anodic end of the electrophoresis medium, electrophoresis is stopped. It can weakly bind to some proteins and impart a blue colour. Other common tracking dyes are xylene cyanol, which has lower mobility, and Orange G, which has a higher mobility.
Loading aids; most PAGE systems are loaded from the top into wells within the gel. To ensure that the sample sinks to the bottom of the gel, sample buffer is supplemented with additives that increase the density of the sample. These additives should be non-ionic and non-reactive towards proteins to avoid interfering with electrophoresis. Common additives are glycerol and sucrose.
Coomassie brilliant blue R-250 (CBB) (mW: 825.97) is the most popular protein stain. It is an anionic dye, which non-specifically binds to proteins. The structure of CBB is predominantly non-polar, and it is usually used in methanolic solution acidified with acetic acid. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background. As SDS is also anionic, it may interfere with the staining process. Therefore, a large volume of staining solution is recommended, at least ten times the volume of the gel.
Ethidium bromide (EtBr) is a popular nucleic acid stain. EtBr allows one to easily visualize DNA or RNA on a gel, as EtBr fluoresces an orange color under UV light. Ethidium bromide binds nucleic acid chains through the process of intercalation. While ethidium bromide is a popular stain, it is important to exercise caution when using EtBr as it is a known carcinogen. Because of this, many researchers opt to use stains such as SYBR Green and SYBR Safe, which are safer alternatives to EtBr. EtBr is used by simply adding it to the gel mixture. Once the gel has run, the gel may be viewed through the use of a photo-documentation system.
Silver staining is used when a more sensitive method of detection is needed: whereas classical Coomassie brilliant blue staining can usually detect a 50 ng protein band, silver staining typically increases the sensitivity 10-100 fold. It is based on the chemistry of photographic development. The proteins are fixed to the gel with a dilute methanol solution, then incubated with an acidic silver nitrate solution. Silver ions are reduced to their metallic form by formaldehyde at alkaline pH. An acidic solution, such as acetic acid, stops development. Silver staining was introduced by Kerenyi and Gallyas as a sensitive procedure to detect trace amounts of proteins in gels. The technique has been extended to the study of other biological macromolecules that have been separated in a variety of supports. Many variables can influence the colour intensity, and every protein has its own staining characteristics; clean glassware, pure reagents and water of the highest purity are the key points to successful staining. Silver staining was developed in the 14th century for colouring the surface of glass and has been used extensively for this purpose since the 16th century. The colour produced by the early silver stains ranged between light yellow and an orange-red. Camillo Golgi perfected silver staining for the study of the nervous system. Golgi's method stains a limited number of cells at random in their entirety.
Autoradiography, also used for protein band detection post gel electrophoresis, uses radioactive isotopes to label proteins, which are then detected by using X-ray film.
Western blotting is a process by which proteins separated in the acrylamide gel are electrophoretically transferred to a stable, manipulable membrane such as a nitrocellulose, nylon, or PVDF membrane. It is then possible to apply immunochemical techniques to visualise the transferred proteins, as well as accurately identify relative increases or decreases of the protein of interest.
See also
Agarose gel electrophoresis
Capillary electrophoresis
DNA electrophoresis
Eastern blotting
Electroblotting
Fast parallel proteolysis (FASTpp)
History of electrophoresis
Isoelectric focusing
Isotachophoresis
Native gel electrophoresis
Northern blotting
Protein electrophoresis
QPNC-PAGE
Southern blotting
Two dimensional SDS-PAGE
Zymography
References
External links
SDS-PAGE: How it Works
Demystifying SDS-PAGE Video
Demystifying SDS-PAGE
SDS-PAGE Calculator for customised recipes for TRIS Urea gels.
2-Dimensional Protein Gelelectrophoresis
Hempelmann E. SDS-Protein PAGE and Proteindetection by Silverstaining and Immunoblotting of Plasmodium falciparum proteins. in: Moll K, Ljungström J, Perlmann H, Scherf A, Wahlgren M (eds) Methods in Malaria Research, 5th edition, 2008, 263-266
Molecular biology techniques
Electrophoresis
Chemical nomenclature
Chemical nomenclature is a set of rules to generate systematic names for chemical compounds. The nomenclature used most frequently worldwide is the one created and developed by the International Union of Pure and Applied Chemistry (IUPAC).
IUPAC nomenclature ensures that each compound (and its various isomers) has only one formally accepted name, known as the systematic IUPAC name. However, some compounds may have an alternative name that is also accepted, known as the preferred IUPAC name, which is generally taken from the common name of that compound. Preferably, the name should also represent the structure or chemistry of a compound.
For example, the main constituent of white vinegar is , which is commonly called acetic acid and is also its recommended IUPAC name, but its formal, systematic IUPAC name is ethanoic acid.
The IUPAC's rules for naming organic and inorganic compounds are contained in two publications, known as the Blue Book and the Red Book, respectively. A third publication, known as the Green Book, recommends the use of symbols for physical quantities (in association with the IUPAP), while a fourth, the Gold Book, defines many technical terms used in chemistry. Similar compendia exist for biochemistry (the White Book, in association with the IUBMB), analytical chemistry (the Orange Book), macromolecular chemistry (the Purple Book), and clinical chemistry (the Silver Book). These "color books" are supplemented by specific recommendations published periodically in the journal Pure and Applied Chemistry.
Purpose of chemical nomenclature
The main purpose of chemical nomenclature is to disambiguate the spoken or written names of chemical compounds: each name should refer to one compound. Secondarily, each compound should have only one name, although in some cases some alternative names are accepted.
Preferably, the name should also represent the structure or chemistry of a compound. This is achieved by the International Chemical Identifier (InChI) nomenclature. However, the American Chemical Society's CAS numbers nomenclature does not represent a compound's structure.
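As a rough illustration of the distinction drawn here, a structure-derived identifier such as an InChI can be generated from a structural description of the molecule, whereas a CAS number is an arbitrary registry identifier. The Python sketch below assumes the open-source RDKit toolkit (with its InChI support) is installed and uses acetic acid, the example compound from the introduction; it is illustrative only.

from rdkit import Chem

# Acetic acid described by its structure (SMILES string), not by a registry number.
mol = Chem.MolFromSmiles("CC(=O)O")

# The InChI string is derived from the structure itself, so it encodes composition and connectivity.
print(Chem.MolToInchi(mol))

# A CAS number, by contrast, is assigned on registration and carries no structural information.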
The nomenclature used depends on the needs of the user, so no single correct nomenclature exists. Rather, different nomenclatures are appropriate for different circumstances.
A common name will successfully identify a chemical compound, given context. Without context, the name should indicate at least the chemical composition. To be more specific, the name may need to represent the three-dimensional arrangement of the atoms. This requires adding more rules to the standard IUPAC system (the Chemical Abstracts Service system (CAS system) is the one used most commonly in this context), at the expense of having names which are longer and less familiar.
The IUPAC system is often criticized for failing to distinguish relevant compounds (for example, for differing reactivity of sulfur allotropes, which IUPAC does not distinguish). While IUPAC has a human-readable advantage over CAS numbering, IUPAC names for some larger, relevant molecules (such as rapamycin) are barely human-readable, so common names are used instead.
Differing needs of chemical nomenclature and lexicography
It is generally understood that the purposes of lexicography and chemical nomenclature differ and are to an extent at odds. Dictionaries of words, whether in traditional print or on the internet, collect and report the meanings of words as their uses appear and change over time. For internet dictionaries with limited or no formal editorial process, definitions (in this case, definitions of chemical names and terms) can change rapidly without concern for the formal or historical meanings. Chemical nomenclature, however (with IUPAC nomenclature as the best example), is necessarily more restrictive: its purpose is to standardize communication and practice so that, when a chemical term is used, it has a fixed meaning relating to chemical structure, thereby giving insights into chemical properties and derived molecular functions. These differing purposes can affect understanding, especially with regard to chemical classes that have achieved popular attention. Examples of this effect are as follows:
resveratrol, a single compound defined clearly by this common name, but that can be confused, popularly, with its cis-isomer,
omega-3 fatty acids, a reasonably well-defined class of chemical structures that is nevertheless broad as a result of its formal definition, and
polyphenols, a fairly broad structural class with a formal definition, but where mistranslations and general misuse of the term relative to the formal definition have resulted in serious errors of usage, and thus in ambiguity in the relationship between structure and activity (SAR).
The rapid pace at which meanings can change on the internet, in particular for chemical compounds with perceived health benefits, ascribed rightly or wrongly, complicates the monosemy of nomenclature (and so access to SAR understanding). Specific examples appear in the Polyphenol article, where varying internet and common-use definitions conflict with any accepted chemical nomenclature connecting polyphenol structure and bioactivity.
History
The nomenclature of alchemy is descriptive, but does not effectively represent the functions mentioned above. Opinions differ about whether this was deliberate on the part of the early practitioners of alchemy or whether it was a consequence of the particular (and often esoteric) theories according to which they worked. While both explanations are probably valid to some extent, it is remarkable that the first "modern" system of chemical nomenclature appeared at the same time as the distinction (by Lavoisier) between elements and compounds, during the late eighteenth century.
The French chemist Louis-Bernard Guyton de Morveau published his recommendations in 1782, hoping that his "constant method of denomination" would "help the intelligence and relieve the memory". The system was refined in collaboration with Berthollet, de Fourcroy and Lavoisier, and promoted by the latter in a textbook that would survive long after his death by guillotine in 1794. The project was also endorsed by Jöns Jakob Berzelius, who adapted the ideas for the German-speaking world.
The recommendations of Guyton were only for what would be known now as inorganic compounds. With the massive expansion of organic chemistry during the mid-nineteenth century and the greater understanding of the structure of organic compounds, the need for a less ad hoc system of nomenclature was felt just as the theoretical basis became available to make this possible. An international conference was convened in Geneva in 1892 by the national chemical societies, from which the first widely accepted proposals for standardization developed.
A commission was established in 1913 by the Council of the International Association of Chemical Societies, but its work was interrupted by World War I. After the war, the task passed to the newly formed International Union of Pure and Applied Chemistry, which first appointed commissions for organic, inorganic, and biochemical nomenclature in 1921 and continues to do so to this day.
Types of nomenclature
Nomenclature has been developed for both organic and inorganic chemistry. There are also designations having to do with structure; see Descriptor (chemistry).
Organic chemistry
Additive name
Conjunctive name
Functional class name, also known as a radicofunctional name
Fusion name
Hantzsch–Widman nomenclature
Multiplicative name
Replacement name
Substitutive name
Subtractive name
Inorganic chemistry
Compositional nomenclature
Type-I ionic binary compounds
For type-I ionic binary compounds, the cation (a metal in most cases) is named first, and the anion (usually a nonmetal) is named second. The cation retains its elemental name (e.g., iron or zinc), but the suffix of the nonmetal changes to -ide. For example, the compound LiBr is made of Li+ cations and Br− anions; thus, it is called lithium bromide. The compound BaO, which is composed of Ba2+ cations and O2− anions, is referred to as barium oxide.
The oxidation state of each element is unambiguous. When these ions combine into a type-I binary compound, their equal-but-opposite charges are neutralized, so the compound's net charge is zero.
Type-II ionic binary compounds
Type-II ionic binary compounds are those in which the cation does not have just one oxidation state. This is common among transition metals. To name these compounds, one must determine the charge of the cation and then render the name as would be done with Type-I ionic compounds, except that a Roman numeral (indicating the charge of the cation) is written in parentheses next to the cation name (this is sometimes referred to as Stock nomenclature). For example, in the compound FeCl3, the cation, iron, can occur as Fe2+ or Fe3+. In order for the compound to have a net charge of zero, the cation must be Fe3+ so that the three Cl− anions can be balanced (3+ and 3− balance to 0). Thus, this compound is termed iron(III) chloride. Another example could be the compound PbS2. Because the S2− anion has a subscript of 2 in the formula (giving a total charge of 4−), the compound must be balanced with a 4+ charge on the cation (lead can form cations with a 4+ or a 2+ charge). Thus, the compound contains one Pb4+ cation for every two S2− anions, the compound is balanced, and its name is written as lead(IV) sulfide.
An older system – relying on Latin names for the elements – is also sometimes used to name Type-II ionic binary compounds. In this system, the metal (instead of a Roman numeral next to it) has the suffix "-ic" or "-ous" added to it to indicate its oxidation state ("-ous" for lower, "-ic" for higher). For example, the compound FeO contains the Fe2+ cation (which balances out with the O2− anion). Since this oxidation state is lower than the other possibility, this compound is sometimes called ferrous oxide. In the compound SnO2, the tin ion is Sn4+ (balancing out the 4− charge of the two O2− anions), and because this is a higher oxidation state than the alternative, this compound is termed stannic oxide.
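The charge-balancing arithmetic behind Stock nomenclature can be sketched in a few lines of Python. The helper below is purely illustrative (not part of any standard library) and assumes the anion charge and the numbers of each ion are already known from the formula.

```python
# Illustrative sketch of the charge balance used in Stock nomenclature: the
# cation charge is whatever cancels the total negative charge of the anions.
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV", 5: "V", 6: "VI", 7: "VII"}

def stock_numeral(n_cations, n_anions, anion_charge):
    """Return the Roman numeral for the cation charge in a neutral formula."""
    total_negative = n_anions * anion_charge            # e.g. three Cl- give 3
    charge, remainder = divmod(total_negative, n_cations)
    assert remainder == 0, "charges cannot be balanced with whole numbers"
    return ROMAN[charge]

print(stock_numeral(1, 3, 1))   # FeCl3 -> III, i.e. iron(III) chloride
print(stock_numeral(1, 2, 2))   # PbS2  -> IV,  i.e. lead(IV) sulfide
```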
Some ionic compounds contain polyatomic ions, which are charged entities containing two or more covalently bonded types of atoms. It is important to know the names of common polyatomic ions; these include:
ammonium
nitrite
nitrate
sulfite
sulfate
hydrogen sulfate (bisulfate)
hydroxide
cyanide
phosphate
hydrogen phosphate
dihydrogen phosphate
carbonate
hydrogen carbonate (bicarbonate)
hypochlorite
chlorite
chlorate
perchlorate
acetate
permanganate
dichromate
chromate
peroxide
superoxide
oxalate
hydrogen oxalate
The formula Na2SO3 denotes that the cation is sodium, or Na+, and that the anion is the sulfite ion. Therefore, this compound is named sodium sulfite. If the given formula is Ca(OH)2, it can be seen that OH− is the hydroxide ion. Since the charge on the calcium ion is 2+, it makes sense that there must be two OH− ions to balance the charge. Therefore, the name of the compound is calcium hydroxide. If one is asked to write the formula for copper(I) chromate, the Roman numeral indicates that the copper ion is Cu+, and one can identify that the compound contains the chromate ion. Two of the 1+ copper ions are needed to balance the charge of one 2− chromate ion, so the formula is Cu2CrO4.
Type-III binary compounds
Type-III binary compounds are bonded covalently. Covalent bonding occurs between nonmetal elements. Compounds bonded covalently are also known as molecules. For the compound, the first element is named first and with its full elemental name. The second element is named as if it were an anion (base name of the element + -ide suffix). Then, prefixes are used to indicate the numbers of each atom present: these prefixes are mono- (one), di- (two), tri- (three), tetra- (four), penta- (five), hexa- (six), hepta- (seven), octa- (eight), nona- (nine), and deca- (ten). The prefix mono- is never used with the first element. Thus, NCl3 is termed nitrogen trichloride, BF3 is termed boron trifluoride, and P2O5 is termed diphosphorus pentoxide (although the a of the prefix penta- should, strictly, not be omitted before a vowel: the 2005 IUPAC Red Book, page 69, states that "the final vowels of multiplicative prefixes should not be elided", with "monoxide", rather than "monooxide", allowed as an exception because of general usage).
Carbon dioxide is written CO2; sulfur tetrafluoride is written SF4. A few compounds, however, have common names that prevail. H2O, for example, is usually termed water rather than dihydrogen monoxide, and NH3 is preferentially termed ammonia rather than nitrogen trihydride.
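As a rough illustration of the prefix rules above, the following Python sketch builds names for Type-III binary compounds. The function and its behaviour are simplified assumptions: it never elides the final vowel of a prefix, in line with the Red Book rule quoted above, and it does not handle the "monoxide" exception.

```python
# Simplified sketch of prefix-based naming for covalent (Type-III) binary
# compounds. Vowels are never elided, so it prints "pentaoxide", not "pentoxide".
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta",
            6: "hexa", 7: "hepta", 8: "octa", 9: "nona", 10: "deca"}

def covalent_binary_name(first_element, n_first, second_ide, n_second):
    first = first_element if n_first == 1 else PREFIXES[n_first] + first_element
    second = PREFIXES[n_second] + second_ide   # mono- is kept on the second element
    return f"{first} {second}"

print(covalent_binary_name("nitrogen", 1, "chloride", 3))   # nitrogen trichloride
print(covalent_binary_name("boron", 1, "fluoride", 3))      # boron trifluoride
print(covalent_binary_name("phosphorus", 2, "oxide", 5))    # diphosphorus pentaoxide
```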
Substitutive nomenclature
This naming method generally follows established IUPAC organic nomenclature. Hydrides of the main group elements (groups 13–17) are given the base name ending with -ane, e.g. borane, oxidane, phosphane (although the name phosphine is also in common use, it is not recommended by IUPAC). The compound PCl3 would thus be named substitutively as trichlorophosphane (with chlorine "substituting"). However, not all such names (or stems) are derived from the element name. For example, NH3 is termed "azane".
Additive nomenclature
This method of naming has been developed principally for coordination compounds, although it can be applied more widely. An example of its application is [CoCl(NH3)5]Cl2, pentaamminechloridocobalt(III) chloride.
Ligands, too, have a special naming convention. Whereas chloride becomes the prefix chloro- in substitutive naming, for a ligand it becomes chlorido-.
See also
Descriptor (chemistry)
International Chemical Identifier
IUPAC nomenclature for organic chemical transformations
IUPAC nomenclature of inorganic chemistry 2005
IUPAC nomenclature of organic chemistry
IUPAC numerical multiplier
List of chemical compounds with unusual names
Preferred IUPAC name
References
External links
Interactive IUPAC Compendium of Chemical Terminology (interactive "Gold Book")
IUPAC Nomenclature Books Series (list of all IUPAC nomenclature books, and means of accessing them)
IUPAC Compendium of Chemical Terminology ("Gold Book") (archived 2005)
Quantities, Units and Symbols in Physical Chemistry ("Green Book")
IUPAC Nomenclature of Organic Chemistry ("Blue Book")
Nomenclature of Inorganic Chemistry IUPAC Recommendations 2005 ("Red Book")
IUPAC Recommendations on Organic & Biochemical Nomenclature, Symbols, Terminology, etc. (includes IUBMB Recommendations for biochemistry)
chemicalize.org A free web site/service that extracts IUPAC names from web pages and annotates a "chemicalized" version with structure images. Structures from annotated pages can also be searched.
ChemAxon Name <> Structure – IUPAC (& traditional) name to structure and structure to IUPAC name software. As used at chemicalize.org
ACD/Name – Generates IUPAC, INDEX (CAS), InChi, Smiles, etc. for drawn structures in 10 languages and translates names to structures. Also available as batch tool and for Pipeline Pilot. Part of I-Lab 2.0 | 0.78891 | 0.996535 | 0.786177 |
Clinical chemistry | Clinical chemistry (also known as chemical pathology, clinical biochemistry or medical biochemistry) is a division in medical laboratory sciences focusing on qualitative tests of important compounds, referred to as analytes or markers, in bodily fluids and tissues using analytical techniques and specialized instruments. This interdisciplinary field includes knowledge from medicine, biology, chemistry, biomedical engineering, informatics, and an applied form of biochemistry (not to be confused with medicinal chemistry, which involves basic research for drug development).
The discipline originated in the late 19th century with the use of simple chemical reaction tests for various components of blood and urine. Many decades later, clinical chemists use automated analyzers in many clinical laboratories. These instruments automate tasks ranging from pipetting and labelling specimens to advanced measurement techniques such as spectrometry, chromatography, photometry, and potentiometry. They detect analytes through changes in optical and electrical properties, yielding the identities and concentrations of naturally occurring analytes such as enzymes, ions, and electrolytes, as well as of less common analytes, all of which are important for diagnosing disease.
Blood and urine are the most common test specimens clinical chemists or medical laboratory scientists collect for routine clinical tests, with a main focus on serum and plasma in blood. There are now many blood tests and clinical urine tests with extensive diagnostic capabilities. Some clinical tests require clinical chemists to process the specimen before testing. Clinical chemists and medical laboratory scientists serve as the interface between the laboratory and clinical practice, advising physicians on which test panel to order and interpreting any irregularities in test results that reflect the patient's health status and organ system functionality. This allows healthcare providers to evaluate a patient's health more accurately, to diagnose disease, to predict the progression of a disease (prognosis), to screen, and to monitor the efficacy of treatment in a timely manner. The type of test required dictates what type of sample is used.
Common Analytes
Some common analytes that clinical chemistry tests analyze include:
Electrolytes
Sodium
Potassium
Chloride
Bicarbonate
Renal (kidney) function tests
Creatinine
Blood urea nitrogen
Liver function tests
Total protein (serum)
Albumin
Globulins
A/G ratio (albumin-globulin)
Protein electrophoresis
Urine protein
Bilirubin; direct; indirect; total
Aspartate transaminase (AST)
Alanine transaminase (ALT)
Gamma-glutamyl transpeptidase (GGT)
Alkaline phosphatase (ALP)
Cardiac markers
H-FABP
Troponin
Myoglobin
CK-MB
B-type natriuretic peptide (BNP)
Minerals
Calcium
Magnesium
Phosphate
Potassium
Blood disorders
Iron
Transferrin
TIBC
Vitamin B12
Vitamin D
Folic acid
Miscellaneous
Glucose
C-reactive protein
Glycated hemoglobin (HbA1c)
Uric acid
Arterial blood gases ([H+], PCO2, PO2)
Adrenocorticotropic hormone (ACTH)
Toxicological screening and forensic toxicology (drugs and toxins)
Neuron-specific enolase (NSE)
Fecal occult blood test (FOBT)
Panel tests
A physician may order many laboratory tests on one specimen, referred to as a test panel, when a single test cannot provide sufficient information to make a swift and accurate diagnosis and treatment plan. A test panel is a group of tests a clinical chemist performs on one sample to look for changes in many analytes that may be indicative of specific medical concerns or of the health status of an organ system. Panel tests thus provide a more extensive evaluation of a patient's health, have higher predictive value for confirming or ruling out a disease, and are quick and cost-effective.
Metabolic Panel
A Metabolic Panel (MP) is a routine group of blood tests commonly used for health screening, disease detection, and monitoring the status of hospitalized patients with specific medical conditions. An MP analyzes common analytes in the blood to assess the functions of the kidneys and liver, as well as electrolyte and acid-base balance. There are two types of MP: the Basic Metabolic Panel (BMP) and the Comprehensive Metabolic Panel (CMP).
Basic Metabolic Panel
BMP is a panel of tests that measures eight analytes in the blood's fluid portion (plasma). The results of the BMP provide valuable information about a patient's kidney function, blood sugar level, electrolyte levels, and the acid-base balance. Abnormal changes in one or more of these analytes can be a sign of serious health issues:
Sodium, Potassium, Chloride, and Carbon Dioxide: these electrolytes carry electrical charges and help regulate the body's water balance, the acid-base balance of the blood, and kidney function.
Calcium: this charged electrolyte is essential for proper nerve and muscle function, blood clotting, and bone health. Changes in the calcium level can be signs of bone disease, muscle cramps/spasms, thyroid disease, or other conditions.
Glucose: this measures the blood sugar level, a crucial energy source for the body and brain. High glucose levels can be a sign of diabetes or insulin resistance.
Urea and Creatinine: these are waste products that the kidneys filter out of the blood. Urea measurements are helpful in detecting and treating kidney failure and related metabolic disorders, whereas creatinine measurements give information on kidney health and are used to track renal dialysis treatment and to monitor hospitalized patients on diuretics.
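As an illustration of how BMP results can be screened in software, the Python sketch below flags values outside a reference interval and computes the commonly used anion gap, Na − (Cl + HCO3). The reference intervals are rough, illustrative values only; real laboratories establish and validate their own intervals, and nothing here is clinical guidance.

```python
# Illustrative sketch of screening BMP results. Reference intervals below are
# approximate, for illustration only; laboratories define their own ranges.
REFERENCE_RANGES = {            # analyte: (low, high)
    "sodium": (135, 145),       # mmol/L
    "potassium": (3.5, 5.0),    # mmol/L
    "chloride": (98, 107),      # mmol/L
    "bicarbonate": (22, 29),    # mmol/L (reported as total CO2)
    "glucose": (70, 100),       # mg/dL, fasting
}

def flag_results(results):
    """Return the analytes that fall outside their (illustrative) interval."""
    return {name: value for name, value in results.items()
            if name in REFERENCE_RANGES
            and not (REFERENCE_RANGES[name][0] <= value <= REFERENCE_RANGES[name][1])}

def anion_gap(sodium, chloride, bicarbonate):
    """Commonly used anion gap: Na+ - (Cl- + HCO3-), in mmol/L."""
    return sodium - (chloride + bicarbonate)

sample = {"sodium": 140, "potassium": 5.8, "chloride": 101,
          "bicarbonate": 24, "glucose": 90}
print(flag_results(sample))        # {'potassium': 5.8}
print(anion_gap(140, 101, 24))     # 15
```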
Comprehensive Metabolic Panel
The comprehensive metabolic panel (CMP) consists of 14 tests: the BMP above plus total protein, albumin, alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and bilirubin.
Specimen Processing
For blood tests, clinical chemists must process the specimen to obtain plasma or serum before testing for the targeted analytes. This is most easily done by centrifugation, which packs the denser blood cells and platelets to the bottom of the centrifuge tube, leaving the liquid fraction resting above the packed cells. This initial step before analysis has recently been included in instruments that operate on the "integrated system" principle. Serum is obtained by allowing the blood to clot before centrifugation, whereas plasma is obtained by centrifuging anticoagulated blood before clotting occurs.
Instruments
Most current medical laboratories now have highly automated analyzers to accommodate the high workload typical of a hospital laboratory, and accept samples for up to about 700 different kinds of tests. Even the largest of laboratories rarely do all these tests themselves, and some must be referred to other labs. Tests performed are closely monitored and quality controlled.
Specialties
The large array of tests can be categorised into sub-specialities of:
General or routine chemistry – commonly ordered blood chemistries (e.g., liver and kidney function tests).
Special chemistry – elaborate techniques such as electrophoresis, and manual testing methods.
Clinical endocrinology – the study of hormones, and diagnosis of endocrine disorders.
Toxicology – the study of drugs of abuse and other chemicals.
Therapeutic Drug Monitoring – measurement of therapeutic medication levels to optimize dosage.
Urinalysis – chemical analysis of urine for a wide array of diseases, along with other fluids such as CSF and effusions
Fecal analysis – mostly for detection of gastrointestinal disorders.
See also
Reference ranges for common blood tests
Medical technologist
Clinical Biochemistry (journal)
Notes and references
Bibliography
External links
American Association for Clinical Chemistry
Association for Mass Spectrometry: Applications to the Clinical Lab (MSACL)
Clinical pathology
Pathology
Laboratory healthcare occupations | 0.794846 | 0.989049 | 0.786142 |
Chemical structure | A chemical structure of a molecule is a spatial arrangement of its atoms and their chemical bonds. Its determination includes a chemist's specifying the molecular geometry and, when feasible and necessary, the electronic structure of the target molecule or other solid. Molecular geometry refers to the spatial arrangement of atoms in a molecule and the chemical bonds that hold the atoms together and can be represented using structural formulae and by molecular models; complete electronic structure descriptions include specifying the occupation of a molecule's molecular orbitals. Structure determination can be applied to a range of targets from very simple molecules (e.g., diatomic oxygen or nitrogen) to very complex ones (e.g., such as protein or DNA).
Background
Theories of chemical structure were first developed by August Kekulé, Archibald Scott Couper, and Aleksandr Butlerov, among others, from about 1858. These theories were the first to state that chemical compounds are not random clusters of atoms and functional groups, but rather have a definite order defined by the valency of the atoms composing the molecule, giving molecules a three-dimensional structure that can be determined or solved.
Concerning chemical structure, one has to distinguish between pure connectivity of the atoms within a molecule (chemical constitution), a description of a three-dimensional arrangement (molecular configuration, includes e.g. information on chirality) and the precise determination of bond lengths, angles and torsion angles, i.e. a full representation of the (relative) atomic coordinates.
In determining structures of chemical compounds, one generally aims to obtain, first and minimally, the pattern and degree of bonding between all atoms in the molecule; when possible, one seeks the three dimensional spatial coordinates of the atoms in the molecule (or other solid).
Structural elucidation
The methods by which one can determine the structure of a molecule is called structural elucidation. These methods include:
concerning only connectivity of the atoms: spectroscopies such as nuclear magnetic resonance (proton and carbon-13 NMR) and various methods of mass spectrometry (to give overall molecular mass, as well as fragment masses). Techniques such as absorption spectroscopy and the vibrational spectroscopies, infrared and Raman, provide, respectively, important supporting information about the numbers and adjacencies of multiple bonds, and about the types of functional groups (whose internal bonding gives vibrational signatures); further inferential studies that give insight into the contributing electronic structure of molecules include cyclic voltammetry and X-ray photoelectron spectroscopy.
concerning precise metric three-dimensional information: such information can be obtained for gases by gas electron diffraction and microwave (rotational) spectroscopy (and other rotationally resolved spectroscopy), and for the crystalline solid state by X-ray crystallography or neutron diffraction. These techniques can produce three-dimensional models at atomic-scale resolution, typically to a precision of 0.001 Å for distances and 0.1° for angles (in unusual cases even better).
Additional sources of information are: when a molecule has an unpaired electron spin in a functional group of its structure, ENDOR and electron spin resonance spectroscopies may also be performed. These latter techniques become all the more important when the molecules contain metal atoms, and when the crystals required by crystallography or the specific atom types required by NMR are unavailable to exploit in the structure determination. Finally, more specialized methods such as electron microscopy are also applicable in some cases.
See also
Structural chemistry
Chemical structure diagram
Crystallographic database
MOGADOC A data base for experimental structures determined in the gas phase
Pauli exclusion principle
Chemical graph generator
References
Further reading
Analytical chemistry | 0.79595 | 0.987319 | 0.785857 |
Theoretical ecology | Theoretical ecology is the scientific discipline devoted to the study of ecological systems using theoretical methods such as simple conceptual models, mathematical models, computational simulations, and advanced data analysis. Effective models improve understanding of the natural world by revealing how the dynamics of species populations are often based on fundamental biological conditions and processes. Further, the field aims to unify a diverse range of empirical observations by assuming that common, mechanistic processes generate observable phenomena across species and ecological environments. Based on biologically realistic assumptions, theoretical ecologists are able to uncover novel, non-intuitive insights about natural processes. Theoretical results are often verified by empirical and observational studies, revealing the power of theoretical methods in both predicting and understanding the noisy, diverse biological world.
The field is broad and includes foundations in applied mathematics, computer science, biology, statistical physics, genetics, chemistry, evolution, and conservation biology. Theoretical ecology aims to explain a diverse range of phenomena in the life sciences, such as population growth and dynamics, fisheries, competition, evolutionary theory, epidemiology, animal behavior and group dynamics, food webs, ecosystems, spatial ecology, and the effects of climate change.
Theoretical ecology has further benefited from the advent of fast computing power, allowing the analysis and visualization of large-scale computational simulations of ecological phenomena. Importantly, these modern tools provide quantitative predictions about the effects of human induced environmental change on a diverse variety of ecological phenomena, such as: species invasions, climate change, the effect of fishing and hunting on food network stability, and the global carbon cycle.
Modelling approaches
As in most other sciences, mathematical models form the foundation of modern ecological theory.
Phenomenological models: distill the functional and distributional shapes from observed patterns in the data, or researchers decide on functions and distribution that are flexible enough to match the patterns they or others (field or experimental ecologists) have found in the field or through experimentation.
Mechanistic models: model the underlying processes directly, with functions and distributions that are based on theoretical reasoning about ecological processes of interest.
Ecological models can be deterministic or stochastic.
Deterministic models always evolve in the same way from a given starting point. They represent the average, expected behavior of a system, but lack random variation. Many system dynamics models are deterministic.
Stochastic models allow for the direct modeling of the random perturbations that underlie real world ecological systems. Markov chain models are stochastic.
Species can be modelled in continuous or discrete time.
Continuous time is modelled using differential equations.
Discrete time is modelled using difference equations. These model ecological processes that can be described as occurring over discrete time steps. Matrix algebra is often used to investigate the evolution of age-structured or stage-structured populations. The Leslie matrix, for example, mathematically represents the discrete time change of an age structured population.
Models are often used to describe real ecological reproduction processes of single or multiple species.
These can be modelled using stochastic branching processes. Examples are the dynamics of interacting populations (predation, competition, and mutualism), which, depending on the species of interest, may best be modeled over either continuous or discrete time. Other examples of such models may be found in the field of mathematical epidemiology where the dynamic relationships that are to be modeled are host–pathogen interactions.
Bifurcation theory is used to illustrate how small changes in parameter values can give rise to dramatically different long run outcomes, a mathematical fact that may be used to explain drastic ecological differences that come about in qualitatively very similar systems. Logistic maps are polynomial mappings, and are often cited as providing archetypal examples of how chaotic behaviour can arise from very simple non-linear dynamical equations. The maps were popularized in a seminal 1976 paper by the theoretical ecologist Robert May. The difference equation is intended to capture the two effects of reproduction and starvation.
In 1930, R.A. Fisher published his classic The Genetical Theory of Natural Selection, which introduced the idea that frequency-dependent fitness brings a strategic aspect to evolution, where the payoffs to a particular organism, arising from the interplay of all of the relevant organisms, are the number of this organism's viable offspring. In 1961, Richard Lewontin applied game theory to evolutionary biology in his Evolution and the Theory of Games,
followed closely by John Maynard Smith, who in his seminal 1972 paper, "Game Theory and the Evolution of Fighting", defined the concept of the evolutionarily stable strategy.
Because ecological systems are typically nonlinear, they often cannot be solved analytically and in order to obtain sensible results, nonlinear, stochastic and computational techniques must be used. One class of computational models that is becoming increasingly popular are the agent-based models. These models can simulate the actions and interactions of multiple, heterogeneous, organisms where more traditional, analytical techniques are inadequate. Applied theoretical ecology yields results which are used in the real world. For example, optimal harvesting theory draws on optimization techniques developed in economics, computer science and operations research, and is widely used in fisheries.
Population ecology
Population ecology is a sub-field of ecology that deals with the dynamics of species populations and how these populations interact with the environment. It is the study of how the population sizes of species living together in groups change over time and space, and was one of the first aspects of ecology to be studied and modelled mathematically.
Exponential growth
The most basic way of modeling population dynamics is to assume that the rate of growth of a population depends only upon the population size at that time and the per capita growth rate of the organism. In other words, if the number of individuals in a population at a time t is N(t), then the rate of population growth is given by:
dN(t)/dt = r N(t)
where r is the per capita growth rate, or the intrinsic growth rate of the organism. It can also be described as r = b − d, where b and d are the per capita time-invariant birth and death rates, respectively. This first-order linear differential equation can be solved to yield the solution
N(t) = N(0) e^(rt),
a trajectory known as Malthusian growth, after Thomas Malthus, who first described its dynamics in 1798. A population experiencing Malthusian growth follows an exponential curve, where N(0) is the initial population size. The population grows when r > 0, and declines when r < 0. The model is most applicable in cases where a few organisms have begun a colony and are rapidly growing without any limitations or restrictions impeding their growth (e.g. bacteria inoculated in rich media).
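A minimal Python sketch of this model (parameter values are arbitrary illustrations) evaluates the closed-form solution directly:

```python
# Malthusian (exponential) growth: closed-form solution of dN/dt = r*N.
import numpy as np

def exponential_growth(n0, r, t):
    return n0 * np.exp(r * t)

t = np.linspace(0, 10, 6)
print(exponential_growth(100, 0.3, t))   # grows for r > 0, declines for r < 0
```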
Logistic growth
The exponential growth model makes a number of assumptions, many of which often do not hold. For example, many factors affect the intrinsic growth rate, which is often not time-invariant. A simple modification of the exponential growth model is to assume that the intrinsic growth rate varies with population size. This is reasonable: the larger the population size, the fewer resources available, which can result in a lower birth rate and higher death rate. Hence, we can replace the time-invariant r with r′(t) = (b − a·N(t)) − (d + c·N(t)), where a and c are constants that modulate birth and death rates in a population-dependent manner (e.g. intraspecific competition). Both a and c will depend on other environmental factors which, for now, we can assume to be constant in this approximate model. The differential equation is now:
dN(t)/dt = ((b − a·N(t)) − (d + c·N(t))) N(t)
This can be rewritten as:
dN(t)/dt = r N(t) (1 − N(t)/K)
where r = b-d and K = (b-d)/(a+c).
The biological significance of K becomes apparent when stabilities of the equilibria of the system are considered. The constant K is the carrying capacity of the population. The equilibria of the system are N = 0 and N = K. If the system is linearized, it can be seen that N = 0 is an unstable equilibrium while K is a stable equilibrium.
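A minimal numerical sketch of logistic growth (arbitrary parameter values, simple Euler integration) shows the population levelling off at the carrying capacity:

```python
# Logistic growth, dN/dt = r*N*(1 - N/K), integrated with a crude Euler step.
def logistic_trajectory(n0, r, k, dt=0.01, steps=5000):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += dt * r * n * (1 - n / k)
        trajectory.append(n)
    return trajectory

traj = logistic_trajectory(n0=10, r=0.5, k=1000)
print(traj[0], traj[-1])   # starts at 10, settles near the carrying capacity K
```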
Structured population growth
Another assumption of the exponential growth model is that all individuals within a population are identical and have the same probabilities of surviving and of reproducing. This is not a valid assumption for species with complex life histories. The exponential growth model can be modified to account for this, by tracking the number of individuals in different age classes (e.g. one-, two-, and three-year-olds) or different stage classes (juveniles, sub-adults, and adults) separately, and allowing individuals in each group to have their own survival and reproduction rates.
The general form of this model is
Nt+1 = L Nt
where Nt is a vector of the number of individuals in each class at time t and L is a matrix that contains the survival probability and fecundity for each class. The matrix L is referred to as the Leslie matrix for age-structured models, and as the Lefkovitch matrix for stage-structured models.
If parameter values in L are estimated from demographic data on a specific population, a structured model can then be used to predict whether this population is expected to grow or decline in the long-term, and what the expected age distribution within the population will be. This has been done for a number of species including loggerhead sea turtles and right whales.
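The projection itself is a repeated matrix multiplication, as in the short NumPy sketch below; the Leslie matrix entries are made-up values used only for illustration.

```python
# Age-structured projection Nt+1 = L*Nt with an illustrative 3-class Leslie
# matrix: top row holds fecundities, the sub-diagonal holds survival rates.
import numpy as np

L = np.array([[0.0, 1.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.7, 0.0]])
n = np.array([100.0, 50.0, 20.0])      # individuals in each age class

for _ in range(50):
    n = L @ n

dominant = max(abs(np.linalg.eigvals(L)))   # long-run growth rate (lambda)
print(n, dominant)                          # population grows if lambda > 1
```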
Community ecology
An ecological community is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources. Interactions between these species form the first steps in analyzing more complex dynamics of ecosystems. These interactions shape the distribution and dynamics of species. Of these interactions, predation is one of the most widespread population activities.
Taken in its most general sense, predation comprises predator–prey, host–pathogen, and host–parasitoid interactions.
Predator–prey interaction
Predator–prey interactions exhibit natural oscillations in the populations of both predator and prey. In 1925, the American mathematician Alfred J. Lotka developed simple equations for predator–prey interactions in his book on biomathematics. The following year, the Italian mathematician Vito Volterra made a statistical analysis of fish catches in the Adriatic and independently developed the same equations. It is one of the earliest and most recognised ecological models, known as the Lotka-Volterra model:
dN/dt = r N − α N P
dP/dt = c α N P − d P
where N is the prey and P is the predator population sizes, r is the rate for prey growth, taken to be exponential in the absence of any predators, α is the prey mortality rate for per-capita predation (also called ‘attack rate’), c is the efficiency of conversion from prey to predator, and d is the exponential death rate for predators in the absence of any prey.
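A minimal sketch of these equations, integrated numerically with SciPy (parameter values chosen arbitrarily), reproduces the characteristic out-of-phase oscillations:

```python
# Lotka-Volterra predator-prey model: dN/dt = r*N - a*N*P, dP/dt = c*a*N*P - d*P.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, r, a, c, d):
    n, p = y
    return [r * n - a * n * p, c * a * n * p - d * p]

sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0],
                args=(1.0, 0.1, 0.5, 0.5), dense_output=True)
t = np.linspace(0, 50, 500)
prey, predators = sol.sol(t)
print(prey.max(), predators.max())   # both populations cycle, out of phase
```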
Volterra originally used the model to explain fluctuations in fish and shark populations after fishing was curtailed during the First World War. However, the equations have subsequently been applied more generally. Other examples of these models include the Lotka-Volterra model of the snowshoe hare and Canadian lynx in North America, the modeling of infectious diseases such as the recent outbreak of SARS, and the biological control of California red scale by the introduction of its parasitoid, Aphytis melinus.
A credible, simple alternative to the Lotka-Volterra predator–prey model and their common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka–Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio-dependent extreme, so if a simple model is needed one can use the Arditi–Ginzburg model as the first approximation.
Host–pathogen interaction
The second interaction, that of host and pathogen, differs from predator–prey interactions in that pathogens are much smaller, have much faster generation times, and require a host to reproduce. Therefore, only the host population is tracked in host–pathogen models. Compartmental models that categorize host population into groups such as susceptible, infected, and recovered (SIR) are commonly used.
Host–parasitoid interaction
The third interaction, that of host and parasitoid, can be analyzed by the Nicholson–Bailey model, which differs from Lotka-Volterra and SIR models in that it is discrete in time. This model, like that of Lotka-Volterra, tracks both populations explicitly. Typically, in its general form, it states:
Nt+1 = λ Nt f(Nt, Pt)
Pt+1 = c Nt [1 − f(Nt, Pt)]
where f(Nt, Pt) describes the probability of infection (typically, Poisson distribution), λ is the per-capita growth rate of hosts in the absence of parasitoids, and c is the conversion efficiency, as in the Lotka-Volterra model.
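A small sketch of this map is given below; it assumes the classic choice f(N, P) = exp(−aP) (the zero term of a Poisson distribution, i.e. the fraction of hosts escaping parasitism) and arbitrary parameter values.

```python
# Nicholson-Bailey map with the classic escape function f(N, P) = exp(-a*P).
import numpy as np

def nicholson_bailey(n0, p0, lam=2.0, a=0.05, c=1.0, steps=50):
    n, p = n0, p0
    history = [(n, p)]
    for _ in range(steps):
        escape = np.exp(-a * p)                       # fraction of hosts not parasitized
        n, p = lam * n * escape, c * n * (1 - escape)
        history.append((n, p))
    return history

print(nicholson_bailey(25.0, 10.0)[:5])   # the basic model yields growing oscillations
```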
Competition and mutualism
In studies of the populations of two species, the Lotka-Volterra system of equations has been extensively used to describe the dynamics of behavior between two species, N1 and N2. Examples include relations between D. discoideum and E. coli,
as well as theoretical analysis of the behavior of the system.
The r coefficients give a “base” growth rate to each species, while K coefficients correspond to the carrying capacity. What can really change the dynamics of a system, however are the α terms. These describe the nature of the relationship between the two species. When α12 is negative, it means that N2 has a negative effect on N1, by competing with it, preying on it, or any number of other possibilities. When α12 is positive, however, it means that N2 has a positive effect on N1, through some kind of mutualistic interaction between the two.
When both α12 and α21 are negative, the relationship is described as competitive. In this case, each species detracts from the other, potentially over competition for scarce resources.
When both α12 and α21 are positive, the relationship becomes one of mutualism. In this case, each species provides a benefit to the other, such that the presence of one aids the population growth of the other.
See Competitive Lotka–Volterra equations for further extensions of this model.
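A numerical sketch of this two-species system is shown below. The functional form is written in the sign convention described above (a negative α12 means N2 harms N1, a positive one means it helps), and both the form and the parameter values are illustrative assumptions rather than a unique standard notation.

```python
# Two-species Lotka-Volterra interaction model, written so that negative alpha
# terms mean competition and positive ones mean mutualism (illustrative form):
#   dN1/dt = r1*N1*(K1 - N1 + a12*N2)/K1
#   dN2/dt = r2*N2*(K2 - N2 + a21*N1)/K2
from scipy.integrate import solve_ivp

def two_species(t, y, r1, r2, k1, k2, a12, a21):
    n1, n2 = y
    return [r1 * n1 * (k1 - n1 + a12 * n2) / k1,
            r2 * n2 * (k2 - n2 + a21 * n1) / k2]

# competition: both interaction coefficients negative
sol = solve_ivp(two_species, (0, 100), [10.0, 10.0],
                args=(1.0, 0.8, 100.0, 80.0, -0.5, -0.6))
print(sol.y[:, -1])   # long-run abundances of the two competitors
```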
Neutral theory
Unified neutral theory is a hypothesis proposed by Stephen P. Hubbell in 2001. The hypothesis aims to explain the diversity and relative abundance of species in ecological communities, although like other neutral theories in ecology, Hubbell's hypothesis assumes that the differences between members of an ecological community of trophically similar species are "neutral," or irrelevant to their success. Neutrality means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis. This implies that biodiversity arises at random, as each species follows a random walk. This can be considered a null hypothesis to niche theory. The hypothesis has sparked controversy, and some authors consider it a more complex version of other null models that fit the data better.
Under unified neutral theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), providing all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interaction such as competing for limited food or light are allowed, so long as all individuals behave the same way. The theory makes predictions that have implications for the management of biodiversity, especially the management of rare species. It predicts the existence of a fundamental biodiversity constant, conventionally written θ, that appears to govern species richness on a wide variety of spatial and temporal scales.
Hubbell built on earlier neutral concepts, including MacArthur & Wilson's theory of island biogeography and Gould's concepts of symmetry and null models.
Spatial ecology
Biogeography
Biogeography is the study of the distribution of species in space and time. It aims to reveal where organisms live, at what abundance, and why they are (or are not) found in a certain geographical area.
Biogeography is most keenly observed on islands, which has led to the development of the subdiscipline of island biogeography. These habitats are often more manageable areas of study because they are more condensed than larger ecosystems on the mainland. In 1967, Robert MacArthur and E.O. Wilson published The Theory of Island Biogeography. This showed that the species richness in an area could be predicted in terms of factors such as habitat area, immigration rate and extinction rate. The theory is considered one of the fundamentals of ecological theory. The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology.
r/K-selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
Niche theory
Metapopulations
Spatial analysis of ecological systems often reveals that assumptions that are valid for spatially homogenous populations – and indeed, intuitive – may no longer be valid when migratory subpopulations moving from one patch to another are considered. In a simple one-species formulation, a subpopulation may occupy a patch, move from one patch to another empty patch, or die out leaving an empty patch behind. In such a case, the proportion of occupied patches p may be represented as
dp/dt = m p (1 − p) − e p
where m is the rate of colonization, and e is the rate of extinction. In this model, if e < m, the steady state value of p is 1 – (e/m) while in the other case, all the patches will eventually be left empty. This model may be made more complex by addition of another species in several different ways, including but not limited to game theoretic approaches, predator–prey interactions, etc. We will consider here an extension of the previous one-species system for simplicity. Let us denote the proportion of patches occupied by the first population as p1, and that by the second as p2. Then,
In this case, if e is too high, p1 and p2 will be zero at steady state. However, when the rate of extinction is moderate, p1 and p2 can stably coexist. The steady state value of p2 is given by
(p*1 may be inferred by symmetry).
If e is zero, the dynamics of the system favor the species that is better at colonizing (i.e. has the higher m value). This leads to a very important result in theoretical ecology known as the Intermediate Disturbance Hypothesis, where the biodiversity (the number of species that coexist in the population) is maximized when the disturbance (of which e is a proxy here) is not too high or too low, but at intermediate levels.
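For the single-species patch model above, a few lines of Python (arbitrary rates, crude Euler integration) confirm the steady states discussed:

```python
# Levins patch-occupancy model, dp/dt = m*p*(1 - p) - e*p.
def levins(p0, m, e, dt=0.01, steps=20000):
    p = p0
    for _ in range(steps):
        p += dt * (m * p * (1 - p) - e * p)
    return p

print(levins(0.1, m=0.5, e=0.2))   # e < m: occupancy approaches 1 - e/m = 0.6
print(levins(0.1, m=0.2, e=0.5))   # e > m: occupancy decays toward zero
```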
The form of the differential equations used in this simplistic modelling approach can be modified. For example:
Colonization may be dependent on p linearly (m*(1-p)) as opposed to the non-linear m*p*(1-p) regime described above. This mode of replication of a species is called the “rain of propagules”, where there is an abundance of new individuals entering the population at every generation. In such a scenario, the steady state where the population is zero is usually unstable.
Extinction may depend non-linearly on p (e*p*(1-p)) as opposed to the linear (e*p) regime described above. This is referred to as the “rescue effect” and it is again harder to drive a population extinct under this regime.
The model can also be extended to combinations of the four possible linear or non-linear dependencies of colonization and extinction on p; these are described in more detail in the literature.
Ecosystem ecology
Introducing new elements, whether biotic or abiotic, into ecosystems can be disruptive. In some cases, it leads to ecological collapse, trophic cascades and the death of many species within the ecosystem. The abstract notion of ecological health attempts to measure the robustness and recovery capacity for an ecosystem; i.e. how far the ecosystem is away from its steady state. Often, however, ecosystems rebound from a disruptive agent. The difference between collapse or rebound depends on the toxicity of the introduced element and the resiliency of the original ecosystem.
If ecosystems are governed primarily by stochastic processes, through which their subsequent state would be determined by both predictable and random actions, they may be more resilient to sudden change than each species individually. In the absence of a balance of nature, the species composition of ecosystems would undergo shifts that would depend on the nature of the change, but entire ecological collapse would probably be an infrequent event. In 1997, Robert Ulanowicz used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz describes approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow) and to eutrophication.
Ecopath is a free ecosystem modelling software suite, initially developed by NOAA, and widely used in fisheries management as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems.
Food webs
Food webs provide a framework within which a complex network of predator–prey interactions can be organised. A food web model is a network of food chains. Each food chain starts with a primary producer or autotroph, an organism, such as a plant, which is able to manufacture its own food. Next in the chain is an organism that feeds on the primary producer, and the chain continues in this way as a string of successive predators. The organisms in each chain are grouped into trophic levels, based on how many links they are removed from the primary producers. The length of the chain, or trophic level, is a measure of the number of species encountered as energy or nutrients move from plants to top predators. Food energy flows from one organism to the next and to the next and so on, with some energy being lost at each level. At a given trophic level there may be one species or a group of species with the same predators and prey.
In 1927, Charles Elton published an influential synthesis on the use of food webs, which resulted in them becoming a central concept in ecology. In 1966, interest in food webs increased after Robert Paine's experimental and descriptive study of intertidal shores, suggesting that food web complexity was key to maintaining species diversity and ecological stability. Many theoretical ecologists, including Sir Robert May and Stuart Pimm, were prompted by this discovery and others to examine the mathematical properties of food webs. According to their analyses, complex food webs should be less stable than simple food webs. The apparent paradox between the complexity of food webs observed in nature and the mathematical fragility of food web models is currently an area of intensive study and debate. The paradox may be due partially to conceptual differences between persistence of a food web and equilibrial stability of a food web.
Systems ecology
Systems ecology can be seen as an application of general systems theory to ecology. It takes a holistic and interdisciplinary approach to the study of ecological systems, and particularly ecosystems. Systems ecology is especially concerned with the way the functioning of ecosystems can be influenced by human interventions. Like other fields in theoretical ecology, it uses and extends concepts from thermodynamics and develops other macroscopic descriptions of complex systems. It also takes account of the energy flows through the different trophic levels in the ecological networks. Systems ecology also considers the external influence of ecological economics, which usually is not otherwise considered in ecosystem ecology. For the most part, systems ecology is a subfield of ecosystem ecology.
Ecophysiology
This is the study of how "the environment, both physical and biological, interacts with the physiology of an organism. It includes the effects of climate and nutrients on physiological processes in both plants and animals, and has a particular focus on how physiological processes scale with organism size".
Behavioral ecology
Swarm behaviour
Swarm behaviour is a collective behaviour exhibited by animals of similar size which aggregate together, perhaps milling about the same spot or perhaps migrating in some direction. Swarm behaviour is commonly exhibited by insects, but it also occurs in the flocking of birds, the schooling of fish and the herd behaviour of quadrupeds. It is a complex emergent behaviour that occurs when individual agents follow simple behavioral rules.
Recently, a number of mathematical models have been discovered which explain many aspects of the emergent behaviour. Swarm algorithms follow a Lagrangian approach or an Eulerian approach. The Eulerian approach views the swarm as a field, working with the density of the swarm and deriving mean field properties. It is a hydrodynamic approach, and can be useful for modelling the overall dynamics of large swarms (Toner J and Tu Y (1995) "Long-range order in a two-dimensional XY model: how birds fly together", Physical Review Letters, 75(23), 4326–4329). However, most models work with the Lagrangian approach, which is an agent-based model following the individual agents (points or particles) that make up the swarm. Individual particle models can follow information on heading and spacing that is lost in the Eulerian approach. Examples include ant colony optimization, self-propelled particles and particle swarm optimization.
On cellular levels, individual organisms also demonstrated swarm behavior. Decentralized systems are where individuals act based on their own decisions without overarching guidance. Studies have shown that individual Trichoplax adhaerens behave like self-propelled particles (SPPs) and collectively display phase transition from ordered movement to disordered movements. Previously, it was thought that the surface-to-volume ratio was what limited the animal size in the evolutionary game. Considering the collective behaviour of the individuals, it was suggested that order is another limiting factor. Central nervous systems were indicated to be vital for large multicellular animals in the evolutionary pathway.
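A minimal self-propelled-particle sketch in the spirit of these agent-based (Lagrangian) models is given below; for brevity it uses global coupling to the mean heading plus noise, rather than a true neighbour search, and all parameter values are arbitrary.

```python
# Minimal self-propelled-particle sketch: constant-speed agents partially align
# with the group's mean heading, subject to noise (global coupling for brevity).
import numpy as np

rng = np.random.default_rng(0)
n_agents, speed, noise, steps = 200, 0.05, 0.3, 500
positions = rng.random((n_agents, 2))
headings = rng.uniform(-np.pi, np.pi, n_agents)

for _ in range(steps):
    mean_heading = np.arctan2(np.sin(headings).mean(), np.cos(headings).mean())
    headings = mean_heading + noise * rng.uniform(-np.pi, np.pi, n_agents)
    positions += speed * np.column_stack((np.cos(headings), np.sin(headings)))

# order parameter: ~1 when headings are aligned (ordered), ~0 when disordered
order = np.hypot(np.cos(headings).mean(), np.sin(headings).mean())
print(order)
```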
Synchronization
Photinus carolinus fireflies will synchronize their flashing frequencies in a collective setting. Individually, there are no apparent patterns in the flashing. In a group setting, periodicity emerges in the flashing pattern. The coexistence of synchronization and asynchronization in the flashings of a system composed of multiple fireflies can be characterized by chimera states. Synchronization can occur spontaneously. The agent-based model has been useful in describing this unique phenomenon. The flashings of individual fireflies can be viewed as oscillators, and the global coupling models are similar to those used in condensed matter physics.
Evolutionary ecology
The British biologist Alfred Russel Wallace is best known for independently proposing a theory of evolution due to natural selection that prompted Charles Darwin to publish his own theory. In his famous 1858 paper, Wallace proposed natural selection as a kind of feedback mechanism which keeps species and varieties adapted to their environment.
The action of this principle is exactly like that of the centrifugal governor of the steam engine, which checks and corrects any irregularities almost before they become evident; and in like manner no unbalanced deficiency in the animal kingdom can ever reach any conspicuous magnitude, because it would make itself felt at the very first step, by rendering existence difficult and extinction almost sure soon to follow.
The cybernetician and anthropologist Gregory Bateson observed in the 1970s that, though writing it only as an example, Wallace had "probably said the most powerful thing that’d been said in the 19th Century". Subsequently, the connection between natural selection and systems theory has become an area of active research.
Other theories
In contrast to previous ecological theories which considered floods to be catastrophic events, the river flood pulse concept argues that the annual flood pulse is the most important aspect and the most biologically productive feature of a river's ecosystem (Benke, A. C., Chaubey, I., Ward, G. M., & Dunn, E. L. (2000). Flood Pulse Dynamics of an Unregulated River Floodplain in the Southeastern U.S. Coastal Plain. Ecology, 2730–2741).
History
Theoretical ecology draws on pioneering work done by G. Evelyn Hutchinson and his students. Brothers H.T. Odum and E.P. Odum are generally recognised as the founders of modern theoretical ecology. Robert MacArthur brought theory to community ecology. Daniel Simberloff was the student of E.O. Wilson, with whom MacArthur collaborated on The Theory of Island Biogeography, a seminal work in the development of theoretical ecology.
Simberloff added statistical rigour to experimental ecology and was a key figure in the SLOSS debate, about whether it is preferable to protect a single large or several small reserves. This resulted in the supporters of Jared Diamond's community assembly rules defending their ideas through Neutral Model Analysis. Simberloff also played a key role in the (still ongoing) debate on the utility of corridors for connecting isolated reserves.
Stephen P. Hubbell and Michael Rosenzweig combined theoretical and practical elements into works that extended MacArthur and Wilson's Island Biogeography Theory - Hubbell with his Unified Neutral Theory of Biodiversity and Biogeography and Rosenzweig with his Species Diversity in Space and Time.
Theoretical and mathematical ecologists
A tentative distinction can be made between mathematical ecologists, ecologists who apply mathematics to ecological problems, and mathematicians who develop the mathematics itself that arises out of ecological problems.
Some notable theoretical ecologists can be found in these categories:
:Category:Mathematical ecologists
:Category:Theoretical biologists
Journals
The American Naturalist
Journal of Mathematical Biology
Journal of Theoretical Biology
Theoretical Ecology
Theoretical Population Biology
Ecological Modelling
See also
Butterfly effect
Complex system biology
Ecological systems theory
Ecosystem model
Integrodifference equation – widely used to model the dispersal and growth of populations
Limiting similarity
Mathematical biology
Population dynamics
Population modeling
Quantitative ecology
Taylor's law
Theoretical biology
References
Further reading
The classic text is Theoretical Ecology: Principles and Applications, by Angela McLean and Robert May. The 2007 edition is published by the Oxford University Press. .
Bolker BM (2008) Ecological Models and Data in R Princeton University Press. .
Case TJ (2000) An illustrated guide to theoretical ecology Oxford University Press. .
Caswell H (2000) Matrix Population Models: Construction, Analysis, and Interpretation'', Sinauer, 2nd Ed. .
Edelstein-Keshet L (2005) Mathematical Models in Biology Society for Industrial and Applied Mathematics. .
Gotelli NJ (2008) A Primer of Ecology Sinauer Associates, 4th Ed. .
Gotelli NJ & A Ellison (2005) A Primer Of Ecological Statistics Sinauer Associates Publishers. .
Hastings A (1996) Population Biology: Concepts and Models Springer. .
Hilborn R & M Clark (1997) The Ecological Detective: Confronting Models with Data Princeton University Press.
Kokko H (2007) Modelling for field biologists and other interesting people Cambridge University Press. .
Kot M (2001) Elements of Mathematical Ecology Cambridge University Press. .
Murray JD (2002) Mathematical Biology, Volume 1 Springer, 3rd Ed. .
Murray JD (2003) Mathematical Biology, Volume 2 Springer, 3rd Ed. .
Pastor J (2008) Mathematical Ecology of Populations and Ecosystems Wiley-Blackwell. .
Roughgarden J (1998) Primer of Ecological Theory Prentice Hall. .
Ulanowicz R (1997) Ecology: The Ascendant Perspective Columbia University Press.
Ecology | 0.810218 | 0.969862 | 0.7858 |
Agricultural chemistry | Agricultural chemistry is the chemistry, especially organic chemistry and biochemistry, as they relate to agriculture. Agricultural chemistry embraces the structures and chemical reactions relevant in the production, protection, and use of crops and livestock. Its applied science and technology aspects are directed towards increasing yields and improving quality, which comes with multiple advantages and disadvantages.
Agricultural and environmental chemistry
This aspect of agricultural chemistry deals with the role of molecular chemistry in agriculture as well as the negative consequences.
Plant Biochemistry
Plant biochemistry encompasses the chemical reactions that occur within plants. In principle, knowledge at a molecular level informs technologies for providing food. Particular focus is on the biochemical differences between plants and other organisms as well as the differences within the plant kingdom, such as dicotyledons vs monocotyledons, gymnosperms vs angiosperms, C3 vs C4 fixers, etc.
Pesticides
Chemical materials developed to assist in the production of food, feed, and fiber include herbicides, insecticides, fungicides, and other pesticides. Pesticides are chemicals that play an important role in increasing crop yield and mitigating crop losses. These work to keep insects and other animals away from crops to allow them to grow undisturbed, effectively regulating pests and diseases.
Disadvantages of pesticides include contamination of the ground and water (see persistent organic pollutants). They may be toxic to non-target species, including birds, fish, pollinators, as well as the farmworkers themselves.
Soil Chemistry
Agricultural chemistry often aims at preserving or increasing the fertility of soil with the goals of maintaining or improving the agricultural yield and improving the quality of the crop. Soils are analyzed with attention to the inorganic matter (minerals), which comprise most of the mass of dry soil, and organic matter, which consists of living organisms, their degradation products, humic acids and fulvic acids.
Fertilizers are a major consideration. While organic fertilizers are time-honored, their use has largely been displaced by chemicals produced from mining (phosphate rock) and the Haber-Bosch process. The use of these materials dramatically increased the rate at which crops are produced, which is able to support the growing human population. Common fertilizers include urea, ammonium sulphate, diammonium phosphate, and calcium ammonium phosphate.
Biofuels and bio-derived materials
Agricultural chemistry encompasses the science and technology of producing not only edible crops, but also feedstocks for fuels ("biofuels") and materials. Ethanol fuel is obtained by fermentation of sugars. Biodiesel is derived from fats, both animal- and plant-derived. Methane can be recovered from manure and other agricultural wastes by microbial action. Lignocellulose is a promising precursor to new materials.
Biotechnology
Biocatalysis is used to produce a number of food products. Millions of tons of high-fructose corn syrup are produced annually by the action of the immobilized enzyme glucose isomerase on corn-derived glucose. Emerging technologies are numerous, including enzymes for clarifying or debittering fruit juices.
A variety of potentially useful chemicals are obtained by engineered plants. Bioremediation is a green route to biodegradation.
GMOs
Genetically modified organisms (GMOs) are plants or other living things that have been altered at the genomic level by scientists to improve their characteristics. Applications include providing new vaccines for humans, increasing nutrient supplies, and creating unique plastics. GMOs may also be able to grow in climates that are not suitable for the original organism. Examples of GMOs include virus-resistant tobacco and squash, delayed-ripening tomatoes, and herbicide-resistant soybeans.
GMOs came with an increased interest in using biotechnology to produce fertilizers and pesticides. Due to an increased market interest in biotechnology in the 1970s, more technology and infrastructure were developed, costs decreased, and research advanced. Since the early 1980s, genetically modified crops have been incorporated into agriculture. Increased biotechnological work calls for the union of biology and chemistry to produce improved crops, a main reason being the increasing amount of food needed to feed a growing population.
That being said, concerns with GMOs include potential antibiotic resistance from eating a GMO. There are also concerns about the long-term effects on the human body, since many GMOs were developed only recently.
Much controversy surrounds GMOs. In the United States, all foods containing GMOs must be labeled as such.
Omics
Particularly relevant is proteomics, since protein nutrition guides much of agriculture.
See also
Agronomy
Food science
Notes and references
Agriculture
Biochemistry
Mass balance
In physics, a mass balance, also called a material balance, is an application of conservation of mass to the analysis of physical systems. By accounting for material entering and leaving a system, mass flows can be identified which might have been unknown, or difficult to measure without this technique. The exact conservation law used in the analysis of the system depends on the context of the problem, but all revolve around mass conservation, i.e., that matter cannot disappear or be created spontaneously.
Therefore, mass balances are used widely in engineering and environmental analyses. For example, mass balance theory is used to design chemical reactors, to analyse alternative processes to produce chemicals, as well as to model pollution dispersion and other processes of physical systems. Mass balances form the foundation of process engineering design. Closely related and complementary analysis techniques include the population balance, energy balance and the somewhat more complex entropy balance. These techniques are required for thorough design and analysis of systems such as the refrigeration cycle.
In environmental monitoring, the term budget calculations is used to describe mass balance equations where they are used to evaluate the monitoring data (comparing input and output, etc.). In biology, the dynamic energy budget theory for metabolic organisation makes explicit use of mass and energy balance.
Introduction
The general form quoted for a mass balance is: the mass that enters a system must, by conservation of mass, either leave the system or accumulate within the system.
Mathematically, the mass balance for a system without a chemical reaction is as follows:
Input = Output + Accumulation
Strictly speaking the above equation holds also for systems with chemical reactions if the terms in the balance equation are taken to refer to total mass, i.e. the sum of all the chemical species of the system. In the absence of a chemical reaction the amount of any chemical species flowing in and out will be the same; this gives rise to an equation for each species present in the system. However, if this is not the case then the mass balance equation must be amended to allow for the generation or depletion (consumption) of each chemical species. Some use one term in this equation to account for chemical reactions, which will be negative for depletion and positive for generation. However, the conventional form of this equation is written to account for both a positive generation term (i.e. product of reaction) and a negative consumption term (the reactants used to produce the products). Although overall one term will account for the total balance on the system, if this balance equation is to be applied to an individual species and then the entire process, both terms are necessary. This modified equation can be used not only for reactive systems, but for population balances such as arise in particle mechanics problems. The equation is given below; note that it simplifies to the earlier equation in the case that the generation and consumption terms are zero:
Input + Generation = Output + Consumption + Accumulation
In the absence of a nuclear reaction the number of atoms flowing in and out must remain the same, even in the presence of a chemical reaction.
For a balance to be formed, the boundaries of the system must be clearly defined.
Mass balances can be taken over physical systems at multiple scales.
Mass balances can be simplified with the assumption of steady state, in which the accumulation term is zero.
Illustrative example
A simple example can illustrate the concept. Consider the situation in which a slurry is flowing into a settling tank to remove the solids in the tank. Solids are collected at the bottom by means of a conveyor belt partially submerged in the tank, and water exits via an overflow outlet.
In this example, there are two substances: solids and water. The water overflow outlet carries an increased concentration of water relative to solids, as compared to the slurry inlet, and the exit of the conveyor belt carries an increased concentration of solids relative to water.
Assumptions
Steady state
Non-reactive system
Analysis
Suppose that the slurry inlet composition (by mass) is 50% solid and 50% water, with a mass flow of . The tank is assumed to be operating at steady state, and as such accumulation is zero, so input and output must be equal for both the solids and water. If we know that the removal efficiency for the slurry tank is 60%, then the water outlet will contain of solids (40% times times 50% solids). If we measure the flow rate of the combined solids and water, and the water outlet is shown to be , then the amount of water exiting via the conveyor belt must be . This allows us to completely determine how the mass has been distributed in the system with only limited information and using the mass balance relations across the system boundaries. The mass balance for this system can be described in a tabular form:
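A minimal numerical sketch of this balance is shown below, using assumed figures: a 100 kg/min slurry feed at 50% solids, a 60% solids-removal efficiency, and a measured 65 kg/min overflow stream are illustrative choices, not values from the text.

```python
# Steady-state mass balance over a settling tank (all numbers are illustrative).
feed = 100.0            # slurry feed, kg/min (assumed)
x_solids_feed = 0.50    # mass fraction of solids in the feed (assumed)
removal_eff = 0.60      # fraction of incoming solids sent to the conveyor (assumed)

solids_in = feed * x_solids_feed
water_in = feed - solids_in

# Split of solids between the two outlets
solids_conveyor = removal_eff * solids_in
solids_overflow = solids_in - solids_conveyor

# Suppose the combined overflow stream is measured at 65 kg/min (assumed measurement)
overflow_total = 65.0
water_overflow = overflow_total - solids_overflow

# The water balance closes the remaining unknown: water leaving on the conveyor
water_conveyor = water_in - water_overflow
conveyor_total = solids_conveyor + water_conveyor

# Overall check: total in must equal total out at steady state
assert abs(feed - (overflow_total + conveyor_total)) < 1e-9
print(f"conveyor stream: {conveyor_total:.1f} kg/min "
      f"({solids_conveyor:.1f} solids + {water_conveyor:.1f} water)")
```

With these assumed numbers the conveyor carries 35 kg/min (30 kg/min solids plus 5 kg/min water), and the two outlet streams sum back to the 100 kg/min feed, closing the balance.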
Mass feedback (recycle)
Mass balances can be performed across systems which have cyclic flows. In these systems output streams are fed back into the input of a unit, often for further reprocessing.
Such systems are common in grinding circuits, where grain is crushed then sieved to only allow fine particles out of the circuit and the larger particles are returned to the roller mill (grinder). However, recycle flows are by no means restricted to solid mechanics operations; they are used in liquid and gas flows, as well. One such example is in cooling towers, where water is pumped through a tower many times, with only a small quantity of water drawn off at each pass (to prevent solids build up) until it has either evaporated or exited with the drawn off water. The mass balance for water is .
The use of the recycle aids in increasing overall conversion of input products, which is useful for low per-pass conversion processes (such as the Haber process).
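A recycle balance is usually closed by iterating on the tear (recycle) stream until it stops changing. The sketch below does this for a hypothetical single-component loop; the feed rate, per-pass conversion, and separator recovery are assumed values chosen only to illustrate the fixed-point iteration, not data from the text.

```python
# Fixed-point iteration on a recycle stream (all numbers are illustrative).
fresh_feed = 100.0      # kg/h of reactant A entering the loop (assumed)
conversion = 0.30       # fraction of A converted per pass through the reactor (assumed)
recovery = 0.90         # fraction of unreacted A the separator returns as recycle (assumed)

recycle = 0.0           # initial guess for the tear stream
for i in range(200):
    reactor_in = fresh_feed + recycle
    unreacted = (1.0 - conversion) * reactor_in
    new_recycle = recovery * unreacted
    if abs(new_recycle - recycle) < 1e-9:
        break
    recycle = new_recycle

# A leaving the loop unconverted is whatever the separator does not send back
overall_conversion = 1.0 - (unreacted - new_recycle) / fresh_feed
print(f"recycle = {recycle:.2f} kg/h after {i} iterations")
print(f"overall conversion = {overall_conversion:.3f} vs single-pass {conversion}")
```

With these assumptions the overall conversion (about 0.81) is far higher than the 0.30 single-pass value, which is the point made above about low per-pass conversion processes.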
Differential mass balances
A mass balance can also be taken differentially. The concept is the same as for a large mass balance, but it is performed in the context of a limiting system (for example, one can consider the limiting case in time or, more commonly, volume). A differential mass balance is used to generate differential equations that can provide an effective tool for modelling and understanding the target system.
The differential mass balance is usually solved in two steps: first, a set of governing differential equations must be obtained, and then these equations must be solved, either analytically or, for less tractable problems, numerically.
The following systems are good examples of the applications of the differential mass balance:
Ideal (stirred) batch reactor
Ideal tank reactor, also named Continuous Stirred Tank Reactor (CSTR)
Ideal Plug Flow Reactor (PFR)
Ideal batch reactor
The ideal completely mixed batch reactor is a closed system. Isothermal conditions are assumed, and mixing prevents concentration gradients as reactant concentrations decrease and product concentrations increase over time. Many chemistry textbooks implicitly assume that the studied system can be described as a batch reactor when they write about reaction kinetics and chemical equilibrium.
The mass balance for a substance A becomes
d(nA)/dt = rA·V
where
rA denotes the rate at which substance A is produced per unit volume;
V is the volume (which may be constant or not);
nA is the number of moles of substance A.
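As a concrete illustration, the sketch below integrates this balance for an assumed first-order decomposition, rA = −k·cA, at constant volume, and compares the result with the analytical solution cA(t) = cA(0)·exp(−k·t). The rate constant, initial concentration, and step size are arbitrary illustrative values.

```python
import math

# Batch reactor, constant volume, first-order decay r_A = -k*c_A (illustrative values).
k = 0.5          # 1/min, assumed rate constant
cA = 1.0         # mol/L, assumed initial concentration
dt = 0.001       # min, explicit Euler step
t_end = 5.0      # min, total simulated time

t = 0.0
while t < t_end:
    cA += dt * (-k * cA)   # d(cA)/dt = r_A at constant volume
    t += dt

analytical = 1.0 * math.exp(-k * t_end)
print(f"numerical  cA({t_end} min) = {cA:.4f} mol/L")
print(f"analytical cA({t_end} min) = {analytical:.4f} mol/L")
```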
In a fed-batch reactor some reactants/ingredients are added continuously or in pulses (compare making porridge by either first blending all ingredients and then letting it boil, which can be described as a batch reactor, or by first mixing only water and salt and making that boil before the other ingredients are added, which can be described as a fed-batch reactor). Mass balances for fed-batch reactors become a bit more complicated.
Reactive example
In the first example, we will show how to use a mass balance to derive a relationship between the percent excess air for the combustion of a hydrocarbon-base fuel oil and the percent oxygen in the combustion product gas. First, normal dry air contains 0.2095 mol of oxygen per mole of air, so there is one mole of O2 in 4.77 mol of dry air. For stoichiometric combustion, the relationships between the mass of air and the mass of each combustible element in a fuel oil are approximately 11.5 kg of air per kg of carbon, 34.3 kg of air per kg of hydrogen, and 4.3 kg of air per kg of sulfur.
Considering the accuracy of typical analytical procedures, an equation for the mass of air per mass of fuel at stoichiometric combustion is:
AFR ≈ 11.51·wC + 34.28·wH + 4.31·wS − 4.32·wO
where wC, wH, wS and wO refer to the mass fraction of each element in the fuel oil, sulfur burning to SO2, and AFR refers to the air-fuel ratio in mass units.
For a fuel oil containing 86.1% C, 13.6% H, 0.2% O, and 0.1% S, the stoichiometric mass of air is about 14.6 kg per kg of fuel, so AFR = 14.56. The combustion product mass is then the mass of fuel plus the mass of air supplied. At exact stoichiometry, O2 should be absent from the product gas. At 15 percent excess air, the AFR = 16.75, the mass of the combustion product gas per kilogram of fuel is 17.75 kg, and it contains about 0.51 kg of excess oxygen. The combustion gas thus contains 2.84 percent O2 by mass. The relationships between percent excess air and %O2 in the combustion gas are accurately expressed by quadratic equations, valid over the range 0–30 percent excess air.
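These figures can be reproduced from elementary stoichiometry. The sketch below does the calculation on a per-kilogram-of-fuel basis, taking the oxygen mass fraction of dry air as 0.2315 and standard atomic masses; the basis and these constants are assumptions of the sketch rather than values stated above.

```python
# Stoichiometric air requirement and flue-gas O2 for the fuel oil above
# (86.1% C, 13.6% H, 0.2% O, 0.1% S by mass), per 1 kg of fuel.
wC, wH, wO, wS = 0.861, 0.136, 0.002, 0.001
O2_IN_AIR = 0.2315          # mass fraction of O2 in dry air (assumed constant)

# kg O2 needed per kg of each element: C + O2 -> CO2, 4H + O2 -> 2H2O, S + O2 -> SO2;
# oxygen already present in the fuel reduces the requirement.
o2_needed = wC * (32.00 / 12.011) + wH * (32.00 / 4.032) + wS * (32.06 / 32.06) - wO
afr_stoich = o2_needed / O2_IN_AIR

excess = 0.15
afr_actual = afr_stoich * (1.0 + excess)
product_mass = 1.0 + afr_actual          # all fuel plus all air ends up in the flue gas
excess_o2 = excess * o2_needed
o2_mass_percent = 100.0 * excess_o2 / product_mass

print(f"stoichiometric AFR = {afr_stoich:.2f}")     # ~14.6, matching the text
print(f"AFR at 15% excess  = {afr_actual:.2f}")     # ~16.75
print(f"flue-gas O2        = {o2_mass_percent:.2f} % by mass")  # ~2.8
```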
In the second example, we will use the law of mass action to derive the expression for a chemical equilibrium constant.
Assume we have a closed reactor in which the following liquid phase reversible reaction occurs:
The mass balance for substance A becomes
d(nA)/dt = rA·V
As we have a liquid phase reaction we can (usually) assume a constant volume, and since nA = cA·V we get
V·d(cA)/dt = rA·V
or
d(cA)/dt = rA
In many textbooks this is given as the definition of reaction rate without specifying the implicit assumption that we are talking about reaction rate in a closed system with only one reaction. This is an unfortunate mistake that has confused many students over the years.
According to the law of mass action the forward reaction rate can be written as
and the backward reaction rate as
The rate at which substance A is produced is thus
and since, at equilibrium, the concentration of A is constant we get
or, rearranged
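A minimal sketch of this derivation, assuming an elementary reversible reaction A + B ⇌ C + D with forward and backward rate constants k+ and k− (the stoichiometry is an illustrative choice, not taken from the text), runs as follows:

```latex
% Law-of-mass-action sketch for an assumed elementary reaction A + B <=> C + D
\begin{align*}
  r_{+} &= k_{+}[A][B], \qquad r_{-} = k_{-}[C][D] \\
  \frac{d[A]}{dt} &= r_A = -k_{+}[A][B] + k_{-}[C][D] \\
  \frac{d[A]}{dt} = 0 \;\text{ at equilibrium}
    \;\Longrightarrow\; K &= \frac{k_{+}}{k_{-}} = \frac{[C][D]}{[A][B]}
\end{align*}
```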
Ideal tank reactor/continuously stirred tank reactor
The continuously mixed tank reactor is an open system with an influent stream of reactants and an effluent stream of products. A lake can be regarded as a tank reactor, and lakes with long turnover times (e.g. with low flux-to-volume ratios) can for many purposes be regarded as continuously stirred (e.g. homogeneous in all respects). The mass balance then becomes
d(nA)/dt = Q_in·cA,in − Q_out·cA,out + rA·V
where
Q_in is the volumetric flow into the system;
Q_out is the volumetric flow out of the system;
cA,in is the concentration of A in the inflow;
cA,out is the concentration of A in the outflow.
In an open system we can never reach a chemical equilibrium. We can, however, reach a steady state where all state variables (temperature, concentrations, etc.) remain constant.
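For a first-order reaction the approach to that steady state can be seen directly by integrating the balance above at constant volume; the flow rate, volume, inlet concentration, and rate constant below are assumed illustrative values.

```python
# Transient CSTR balance at constant volume V: dcA/dt = (Q/V)*(cA_in - cA) + r_A,
# with a first-order reaction r_A = -k*cA (all parameter values are illustrative).
Q = 2.0        # L/min, volumetric flow in = flow out (assumed)
V = 10.0       # L, reactor volume (assumed)
cA_in = 1.0    # mol/L, inlet concentration (assumed)
k = 0.3        # 1/min, assumed rate constant

cA = 0.0       # start with a reactor containing no A
dt = 0.01
for step in range(int(60.0 / dt)):           # simulate 60 minutes
    dcA_dt = (Q / V) * (cA_in - cA) - k * cA
    cA += dt * dcA_dt

cA_steady = Q * cA_in / (Q + k * V)          # analytical steady state
print(f"cA after 60 min = {cA:.4f} mol/L")
print(f"steady state    = {cA_steady:.4f} mol/L")
```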
Example
Consider a bathtub in which there is some bathing salt dissolved. We now fill in more water, keeping the bottom plug in. What happens?
Since there is no reaction, rA = 0, and since there is no outflow, Q_out = 0. Because the added water contains no salt, cA,in = 0, and the mass balance becomes
d(cA·V)/dt = 0
or
V·d(cA)/dt + cA·dV/dt = 0
Using a mass balance for total volume, however, it is evident that dV/dt = Q_in and that the volume grows with time. Thus we get
d(cA)/dt = −(cA/V)·Q_in ≠ 0
Note that there is no reaction and hence no reaction rate or rate law involved, and yet d(cA)/dt ≠ 0. We can thus draw the conclusion that reaction rate cannot be defined in a general manner using d(cA)/dt. One must first write down a mass balance before a link between d(cA)/dt and the reaction rate can be found. Many textbooks, however, define reaction rate as
rA = d(cA)/dt
without mentioning that this definition implicitly assumes that the system is closed, has a constant volume and that there is only one reaction.
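A short numerical check of the bathtub example makes the point explicit: the salt concentration falls purely by dilution, with no reaction at all. The initial volume, inflow, and initial concentration are assumed for illustration only.

```python
# Bathtub dilution: pure water flows in, nothing flows out, no reaction.
# d(cA*V)/dt = 0  =>  dcA/dt = -(cA/V)*Q_in   (illustrative parameter values)
V = 100.0      # L, initial water volume (assumed)
cA = 10.0      # g/L, initial salt concentration (assumed)
Q_in = 5.0     # L/min of pure water added (assumed)

dt = 0.001
for step in range(int(10.0 / dt)):   # run for 10 minutes
    cA += dt * (-(cA / V) * Q_in)    # dilution only, r_A = 0
    V += dt * Q_in                   # total-volume balance: dV/dt = Q_in

analytical = 10.0 * 100.0 / V        # salt mass is constant: cA = cA0*V0/V
print(f"numerical  cA = {cA:.4f} g/L, analytical cA = {analytical:.4f} g/L")
```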
Ideal plug flow reactor (PFR)
The idealized plug flow reactor is an open system resembling a tube with no mixing in the direction of flow but perfect mixing perpendicular to the direction of flow, often used for systems like rivers and water pipes if the flow is turbulent. When a mass balance is made for a tube, one first considers an infinitesimal part of the tube and makes a mass balance over that using the ideal tank reactor model. That mass balance is then integrated over the entire reactor volume to obtain the balance for the reactor as a whole.
In numeric solutions, e.g. when using computers, the ideal tube is often translated to a series of tank reactors, as it can be shown that a PFR is equivalent to an infinite number of stirred tanks in series, but the latter is often easier to analyze, especially at steady state.
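For a first-order reaction this equivalence is easy to check numerically: with the total space time τ split over N equal stirred tanks, the fraction of reactant remaining is (1 + kτ/N)^(−N), which tends to the plug-flow result exp(−kτ) as N grows. The rate constant and space time below are arbitrary illustrative values.

```python
import math

# N equal CSTRs in series versus an ideal PFR, first-order reaction (illustrative values).
k = 1.0      # 1/min, assumed rate constant
tau = 2.0    # min, total space time V/Q (assumed)

pfr_remaining = math.exp(-k * tau)
print(f"PFR fraction remaining: {pfr_remaining:.4f}")
for N in (1, 2, 5, 20, 100):
    cstr_series_remaining = (1.0 + k * tau / N) ** (-N)
    print(f"{N:3d} tanks in series:   {cstr_series_remaining:.4f}")
```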
More complex problems
In reality, reactors are often non-ideal, in which combinations of the reactor models above are used to describe the system. Not only chemical reaction rates, but also mass transfer rates may be important in the mathematical description of a system, especially in heterogeneous systems.
As the chemical reaction rate depends on temperature it is often necessary to make both an energy balance (often a heat balance rather than a full-fledged energy balance) as well as mass balances to fully describe the system. A different reactor model might be needed for the energy balance: A system that is closed with respect to mass might be open with respect to energy e.g. since heat may enter the system through conduction.
Commercial use
In industrial process plants, using the fact that the mass entering and leaving any portion of a process plant must balance, data validation and reconciliation algorithms may be employed to correct measured flows, provided that enough redundancy of flow measurements exist to permit statistical reconciliation and exclusion of detectably erroneous measurements. Since all real world measured values contain inherent error, the reconciled measurements provide a better basis than the measured values do for financial reporting, optimization, and regulatory reporting. Software packages exist to make this commercially feasible on a daily basis.
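A minimal version of such a reconciliation can be written as a constrained least-squares problem: adjust the measured flows as little as possible (weighted by their variances) so that the mass balance closes exactly. The three-stream splitter and the measurement values below are assumptions made for illustration, not a description of any particular commercial package.

```python
import numpy as np

# Reconcile measured flows around a splitter where feed = out1 + out2.
# Constraint A @ x = 0 with A = [1, -1, -1]; minimise (x - m)^T S^-1 (x - m).
m = np.array([100.0, 61.0, 42.0])        # measured flows, kg/h (assumed values)
sigma = np.array([1.0, 1.5, 1.5])        # measurement standard deviations (assumed)
S = np.diag(sigma ** 2)                  # measurement covariance
A = np.array([[1.0, -1.0, -1.0]])        # mass balance: feed - out1 - out2 = 0

# Closed-form solution of the equality-constrained weighted least squares:
# x = m - S A^T (A S A^T)^-1 (A m)
correction = S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)
x = m - correction.ravel()

print("raw imbalance :", (A @ m).item())       # -3.0 kg/h with these numbers
print("reconciled    :", np.round(x, 2))
print("new imbalance :", (A @ x).item())       # ~0 after reconciliation
```

Streams with larger assumed uncertainty absorb more of the correction, which is the basic idea behind statistical data validation and reconciliation.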
See also
Bioreactor
Chemical engineering
Continuity equation
Dilution (equation)
Energy accounting
Glacier mass balance
Mass flux
Material flow analysis
Material balance planning
Fluid mechanics
References
External links
Material Balance Calculations
Material Balance Fundamentals
The Material Balance for Chemical Reactors
Material and energy balance
Heat and material balance method of process control for petrochemical plants and oil refineries, United States Patent 6751527
Mass
Chemical process engineering
Transport phenomena
Chemical reactor
A chemical reactor is an enclosed volume in which a chemical reaction takes place. In chemical engineering, it is generally understood to be a process vessel used to carry out a chemical reaction, which is one of the classic unit operations in chemical process analysis. The design of a chemical reactor deals with multiple aspects of chemical engineering. Chemical engineers design reactors to maximize net present value for the given reaction. Designers ensure that the reaction proceeds with the highest efficiency towards the desired output product, producing the highest yield of product while requiring the least amount of money to purchase and operate. Normal operating expenses include energy input, energy removal, raw material costs, labor, etc. Energy changes can come in the form of heating or cooling, pumping to increase pressure, frictional pressure loss or agitation.
Chemical reaction engineering is the branch of chemical engineering which deals with chemical reactors and their design, especially by application of chemical kinetics to industrial systems.
Overview
The most common basic types of chemical reactors are tanks (where the reactants mix in the whole volume) and pipes or tubes (for laminar flow reactors and plug flow reactors).
Both types can be used as continuous reactors or batch reactors, and either may accommodate one or more solids (reagents, catalysts, or inert materials), but the reagents and products are typically fluids (liquids or gases). Reactors in continuous processes are typically run at steady-state, whereas reactors in batch processes are necessarily operated in a transient state. When a reactor is brought into operation, either for the first time or after a shutdown, it is in a transient state, and key process variables change with time.
There are three idealised models used to estimate the most important process variables of different chemical reactors:
Batch reactor model,
Continuous stirred-tank reactor model (CSTR), and
Plug flow reactor model (PFR).
Many real-world reactors can be modeled as a combination of these basic types.
Key process variables include:
Residence time (τ, lower case Greek tau)
Volume (V)
Temperature (T)
Pressure (P)
Concentrations of chemical species (C1, C2, C3, ... Cn)
Heat transfer coefficients (h, U)
A tubular reactor can often be a packed bed. In this case, the tube or channel contains particles or pellets, usually a solid catalyst. The reactants, in liquid or gas phase, are pumped through the catalyst bed. A chemical reactor may also be a fluidized bed; see Fluidized bed reactor.
Chemical reactions occurring in a reactor may be exothermic, meaning giving off heat, or endothermic, meaning absorbing heat. A tank reactor may have a cooling or heating jacket or cooling or heating coils (tubes) wrapped around the outside of its vessel wall to cool down or heat up the contents, while tubular reactors can be designed like heat exchangers if the reaction is strongly exothermic, or like furnaces if the reaction is strongly endothermic.
Types
Batch reactor
The simplest type of reactor is a batch reactor. Materials are loaded into a batch reactor, and the reaction proceeds with time. A batch reactor does not reach a steady state, and control of temperature, pressure and volume is often necessary. Many batch reactors therefore have ports for sensors and material input and output. Batch reactors are typically used in small-scale production and reactions with biological materials, such as in brewing, pulping, and production of enzymes. One example of a batch reactor is a pressure reactor.
CSTR (continuous stirred-tank reactor)
In a CSTR, one or more fluid reagents are introduced into a tank reactor which is typically stirred with an impeller to ensure proper mixing of the reagents while the reactor effluent is removed. Dividing the volume of the tank by the average volumetric flow rate through the tank gives the space time, or the time required to process one reactor volume of fluid. Using chemical kinetics, the reaction's expected percent completion can be calculated. Some important aspects of the CSTR:
At steady-state, the mass flow rate in must equal the mass flow rate out, otherwise the tank will overflow or go empty (transient state). While the reactor is in a transient state the model equation must be derived from the differential mass and energy balances.
The reaction proceeds at the reaction rate associated with the final (output) concentration, since the concentration is assumed to be homogenous throughout the reactor.
Often, it is economically beneficial to operate several CSTRs in series. This allows, for example, the first CSTR to operate at a higher reagent concentration and therefore a higher reaction rate. In these cases, the sizes of the reactors may be varied in order to minimize the total capital investment required to implement the process.
It can be demonstrated that an infinite number of infinitely small CSTRs operating in series would be equivalent to a PFR.
The behavior of a CSTR is often approximated or modeled by that of a Continuous Ideally Stirred-Tank Reactor (CISTR). All calculations performed with CISTRs assume perfect mixing. If the residence time is 5-10 times the mixing time, this approximation is considered valid for engineering purposes. The CISTR model is often used to simplify engineering calculations and can be used to describe research reactors. In practice it can only be approached, particularly in industrial size reactors in which the mixing time may be very large.
A loop reactor is a hybrid type of catalytic reactor that physically resembles a tubular reactor, but operates like a CSTR. The reaction mixture is circulated in a loop of tube, surrounded by a jacket for cooling or heating, and there is a continuous flow of starting material in and product out.
PFR (plug flow reactor)
In a PFR, sometimes called continuous tubular reactor (CTR), one or more fluid reagents are pumped through a pipe or tube. The chemical reaction proceeds as the reagents travel through the PFR. In this type of reactor, the changing reaction rate creates a gradient with respect to distance traversed; at the inlet to the PFR the rate is very high, but as the concentrations of the reagents decrease and the concentration of the product(s) increases the reaction rate slows. Some important aspects of the PFR:
The idealized PFR model assumes no axial mixing: any element of fluid traveling through the reactor doesn't mix with fluid upstream or downstream from it, as implied by the term "plug flow".
Reagents may be introduced into the PFR at locations in the reactor other than the inlet. In this way, a higher efficiency may be obtained, or the size and cost of the PFR may be reduced.
A PFR has a higher theoretical efficiency than a CSTR of the same volume. That is, given the same space-time (or residence time), a reaction will proceed to a higher percentage completion in a PFR than in a CSTR. This is not always true for reversible reactions.
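For an irreversible first-order reaction this comparison can be made quantitative: at equal space time τ the conversions are X_PFR = 1 − exp(−kτ) and X_CSTR = kτ/(1 + kτ). The sketch below tabulates both for a few assumed values of kτ.

```python
import math

# Conversion of a first-order reaction in an ideal PFR vs a single ideal CSTR
# at the same space time tau (the k*tau values are arbitrary illustrative points).
for k_tau in (0.5, 1.0, 2.0, 4.0):
    x_pfr = 1.0 - math.exp(-k_tau)
    x_cstr = k_tau / (1.0 + k_tau)
    print(f"k*tau = {k_tau:4.1f}:  X_PFR = {x_pfr:.3f}   X_CSTR = {x_cstr:.3f}")
```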
For most chemical reactions of industrial interest, it is impossible for the reaction to proceed to 100% completion. The rate of reaction decreases as the reactants are consumed until the point where the system reaches dynamic equilibrium (no net reaction, or change in chemical species occurs). The equilibrium point for most systems is less than 100% complete. For this reason a separation process, such as distillation, often follows a chemical reactor in order to separate any remaining reagents or byproducts from the desired product. These reagents may sometimes be reused at the beginning of the process, such as in the Haber process. In some cases, very large reactors would be necessary to approach equilibrium, and chemical engineers may choose to separate the partially reacted mixture and recycle the leftover reactants.
Under laminar flow conditions, the assumption of plug flow is highly inaccurate, as the fluid traveling through the center of the tube moves much faster than the fluid at the wall. The continuous oscillatory baffled reactor (COBR) achieves thorough mixing by the combination of fluid oscillation and orifice baffles, allowing plug flow to be approximated under laminar flow conditions.
Semibatch reactor
A semibatch reactor is operated with both continuous and batch inputs and outputs. A fermenter, for example, is loaded with a batch of medium and microbes which constantly produces carbon dioxide that must be removed continuously. Similarly, reacting a gas with a liquid is usually difficult, because a large volume of gas is required to react with an equal mass of liquid. To overcome this problem, a continuous feed of gas can be bubbled through a batch of a liquid. In general, in semibatch operation, one chemical reactant is loaded into the reactor and a second chemical is added slowly (for instance, to prevent side reactions), or a product which results from a phase change is continuously removed, for example a gas formed by the reaction, a solid that precipitates out, or a hydrophobic product that forms in an aqueous solution.
Catalytic reactor
Although catalytic reactors are often implemented as plug flow reactors, their analysis requires more complicated treatment. The rate of a catalytic reaction is proportional to the amount of catalyst the reagents contact, as well as the concentration of the reactants. With a solid phase catalyst and fluid phase reagents, this is proportional to the exposed area, efficiency of diffusion of reagents in and products out, and efficacy of mixing. Perfect mixing usually cannot be assumed. Furthermore, a catalytic reaction pathway often occurs in multiple steps with intermediates that are chemically bound to the catalyst; and as the chemical binding to the catalyst is also a chemical reaction, it may affect the kinetics. Catalytic reactions often display so-called falsified kinetics, when the apparent kinetics differ from the actual chemical kinetics due to physical transport effects.
The behavior of the catalyst is also a consideration. Particularly in high-temperature petrochemical processes, catalysts are deactivated by processes such as sintering, coking, and poisoning.
A common example of a catalytic reactor is the catalytic converter that processes toxic components of automobile exhausts. However, most petrochemical reactors are catalytic, and are responsible for most industrial chemical production, with extremely high-volume examples including sulfuric acid, ammonia, reformate/BTEX (benzene, toluene, ethylbenzene and xylene), and fluid catalytic cracking. Various configurations are possible, see Heterogeneous catalytic reactor.
References
External links
Chemical reactors
ChemSpider
ChemSpider is a freely accessible online database of chemicals owned by the Royal Society of Chemistry. It contains information on more than 100 million molecules from over 270 data sources, each of them receiving a unique identifier called ChemSpider Identifier.
Sources
The database sources include:
Professional databases
EPA DSSTox
U.S. Food and Drug Administration (FDA)
Human Metabolome Database
Journal of Heterocyclic Chemistry
KEGG
KUMGM
LeadScope
LipidMAPS
Marinlit
MDPI
MICAD
MLSMR
MMDB
MOLI
MTDP
Nanogen
Nature Chemical Biology
NCGC
NIAID
National Institutes of Health (NIH)
NINDS Approved Drug Screening Program
NIST
NIST Chemistry WebBook
NMMLSC
NMRShiftDB
PANACHE
PCMD
PDSP
Peptides
Prous Science Drugs of the Future
QSAR
R&D Chemicals
San Diego Center for Chemical Genomics
SGCOxCompounds, SGCStoCompounds
SMID
Specs
Structural Genomics Consortium
SureChem
Synthon-Lab
Thomson Pharma
Total TOSLab Building-Blocks
UM-BBD
UPCMLD
UsefulChem
Web of Science
ChemAid
Crowdsourcing
The ChemSpider database can be updated with user contributions including chemical structure deposition, spectra deposition and user curation. This is a crowdsourcing approach to develop an online chemistry database. Crowdsourced based curation of the data has produced a dictionary of chemical names associated with chemical structures that has been used in text-mining applications of the biomedical and chemical literature.
However, database rights are not waived and a data dump is not available; in fact, the FAQ even states that only limited downloads are allowed. Therefore, the right to fork is not guaranteed and the project cannot be considered free/open.
Features
Searching
A number of available search modules are provided:
The standard search allows querying for systematic names, trade names and synonyms and registry numbers
The advanced search allows interactive searching by chemical structure, chemical substructure, using also molecular formula and molecular weight range, CAS numbers, suppliers, etc. The search can be used to widen or restrict already found results.
Structure searching on mobile devices can be done using free apps for iOS (iPhone/iPod/iPad) and for Android.
Chemistry document mark-up
The ChemSpider database has been used in combination with text mining as the basis of chemistry document markup. ChemMantis, the Chemistry Markup And Nomenclature Transformation Integrated System uses algorithms to identify and extract chemical names from documents and web pages and converts the chemical names to chemical structures using name-to-structure conversion algorithms and dictionary look-ups in the ChemSpider database. The result is an integrated system between chemistry documents and information look-up via ChemSpider into over 150 data sources.
SyntheticPages
SyntheticPages is a free interactive database of synthetic chemistry procedures operated by the Royal Society of Chemistry. Users submit synthetic procedures which they have conducted themselves for publication on the site. These procedures may be original works, but they are more often based on literature reactions. Citations to the original published procedure are made where appropriate. They are checked by a scientific editor before posting. The pages do not undergo formal peer-review like a scientific journal article but comments can be made by logged-in users. The comments are also moderated by scientific editors. The intention is to collect practical experience of how to conduct useful chemical synthesis in the lab. While experimental methods published in an ordinary academic journal are listed formally and concisely, the procedures in ChemSpider SyntheticPages are given with more practical detail. Informality is encouraged. Comments by submitters are included as well. Other publications with comparable amounts of detail include Organic Syntheses and Inorganic Syntheses. The SyntheticPages site was originally set up by Professors Kevin Booker-Milburn (University of Bristol), Stephen Caddick (University College London), Peter Scott (University of Warwick) and Max Hammond. In February 2010 a merger was announced with the Royal Society of Chemistry's chemical structure search engine ChemSpider and the formation of ChemSpider|SyntheticPages (CS|SP).
Other services
A number of services are made available online. These include the conversion of chemical names to chemical structures, the generation of SMILES and InChI strings as well as the prediction of many physicochemical parameters and integration to a web service allowing NMR prediction.
History
ChemSpider was acquired by the Royal Society of Chemistry (RSC) in May, 2009. Prior to the acquisition by RSC, ChemSpider was controlled by a private corporation, ChemZoo Inc. The system was first launched in March 2007 in a beta release form and transitioned to release in March 2008.
Open PHACTS
ChemSpider served as the chemical compound repository as part of the Open PHACTS project, an Innovative Medicines Initiative. Open PHACTS developed to open standards, with an open access, semantic web approach to address bottlenecks in small molecule drug discovery - disparate information sources, lack of standards and information overload.
See also
NIST
PubChem
DrugBank
ChEBI
ChEMBL
Software for molecular modeling
References
Chemical databases
Websites which use Wikipedia
Internet properties established in 2007
Royal Society of Chemistry
Biological databases
Scientific law
Scientific laws or laws of science are statements, based on repeated experiments or observations, that describe or predict a range of natural phenomena. The term law has diverse usage in many cases (approximate, accurate, broad, or narrow) across all fields of natural science (physics, chemistry, astronomy, geoscience, biology). Laws are developed from data and can be further developed through mathematics; in all cases they are directly or indirectly based on empirical evidence. It is generally understood that they implicitly reflect, though they do not explicitly assert, causal relationships fundamental to reality, and are discovered rather than invented.
Scientific laws summarize the results of experiments or observations, usually within a certain range of application. In general, the accuracy of a law does not change when a new theory of the relevant phenomenon is worked out, but rather the scope of the law's application, since the mathematics or statement representing the law does not change. As with other kinds of scientific knowledge, scientific laws do not express absolute certainty, as mathematical laws do. A scientific law may be contradicted, restricted, or extended by future observations.
A law can often be formulated as one or several statements or equations, so that it can predict the outcome of an experiment. Laws differ from hypotheses and postulates, which are proposed during the scientific process before and during validation by experiment and observation. Hypotheses and postulates are not laws, since they have not been verified to the same degree, although they may lead to the formulation of laws. Laws are narrower in scope than scientific theories, which may entail one or several laws. Science distinguishes a law or theory from facts. Calling a law a fact is ambiguous, an overstatement, or an equivocation. The nature of scientific laws has been much discussed in philosophy, but in essence scientific laws are simply empirical conclusions reached by scientific method; they are intended to be neither laden with ontological commitments nor statements of logical absolutes.
Overview
A scientific law always applies to a physical system under repeated conditions, and it implies that there is a causal relationship involving the elements of the system. Factual and well-confirmed statements like "Mercury is liquid at standard temperature and pressure" are considered too specific to qualify as scientific laws. A central problem in the philosophy of science, going back to David Hume, is that of distinguishing causal relationships (such as those implied by laws) from principles that arise due to constant conjunction.
Laws differ from scientific theories in that they do not posit a mechanism or explanation of phenomena: they are merely distillations of the results of repeated observation. As such, the applicability of a law is limited to circumstances resembling those already observed, and the law may be found to be false when extrapolated. Ohm's law only applies to linear networks; Newton's law of universal gravitation only applies in weak gravitational fields; the early laws of aerodynamics, such as Bernoulli's principle, do not apply in the case of compressible flow such as occurs in transonic and supersonic flight; Hooke's law only applies to strain below the elastic limit; Boyle's law applies with perfect accuracy only to the ideal gas, etc. These laws remain useful, but only under the specified conditions where they apply.
Many laws take mathematical forms, and thus can be stated as an equation; for example, the law of conservation of energy can be written as dE/dt = 0, where E is the total amount of energy in the universe. Similarly, the first law of thermodynamics can be written as dU = δQ − δW, and Newton's second law can be written as F = dp/dt. While these scientific laws explain what our senses perceive, they are still empirical (acquired by observation or scientific experiment) and so are not like mathematical theorems which can be proved purely by mathematics.
Like theories and hypotheses, laws make predictions; specifically, they predict that new observations will conform to the given law. Laws can be falsified if they are found in contradiction with new data.
Some laws are only approximations of other more general laws, and are good approximations with a restricted domain of applicability. For example, Newtonian dynamics (which is based on Galilean transformations) is the low-speed limit of special relativity (since the Galilean transformation is the low-speed approximation to the Lorentz transformation). Similarly, the Newtonian gravitation law is a low-mass approximation of general relativity, and Coulomb's law is an approximation to quantum electrodynamics at large distances (compared to the range of weak interactions). In such cases it is common to use the simpler, approximate versions of the laws, instead of the more accurate general laws.
Laws are constantly being tested experimentally to increasing degrees of precision, which is one of the main goals of science. The fact that laws have never been observed to be violated does not preclude testing them at increased accuracy or in new kinds of conditions to confirm whether they continue to hold, or whether they break, and what can be discovered in the process. It is always possible for laws to be invalidated or proven to have limitations, by repeatable experimental evidence, should any be observed. Well-established laws have indeed been invalidated in some special cases, but the new formulations created to explain the discrepancies generalize upon, rather than overthrow, the originals. That is, the invalidated laws have been found to be only close approximations, to which other terms or factors must be added to cover previously unaccounted-for conditions, e.g. very large or very small scales of time or space, enormous speeds or masses, etc. Thus, rather than unchanging knowledge, physical laws are better viewed as a series of improving and more precise generalizations.
Properties
Scientific laws are typically conclusions based on repeated scientific experiments and observations over many years and which have become accepted universally within the scientific community. A scientific law is "inferred from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present". The production of a summary description of our environment in the form of such laws is a fundamental aim of science.
Several general properties of scientific laws, particularly when referring to laws in physics, have been identified. Scientific laws are:
True, at least within their regime of validity. By definition, there have never been repeatable contradicting observations.
Universal. They appear to apply everywhere in the universe.
Simple. They are typically expressed in terms of a single mathematical equation.
Absolute. Nothing in the universe appears to affect them.
Stable. Unchanged since first discovered (although they may have been shown to be approximations of more accurate laws),
All-encompassing. Everything in the universe apparently must comply with them (according to observations).
Generally conservative of quantity.
Often expressions of existing homogeneities (symmetries) of space and time.
Typically theoretically reversible in time (if non-quantum), although time itself is irreversible.
Broad. In physics, laws exclusively refer to the broad domain of matter, motion, energy, and force itself, rather than more specific systems in the universe, such as living systems, e.g. the mechanics of the human body.
The term "scientific law" is traditionally associated with the natural sciences, though the social sciences also contain laws. For example, Zipf's law is a law in the social sciences which is based on mathematical statistics. In these cases, laws may describe general trends or expected behaviors rather than being absolutes.
In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proved to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be absolutely proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined.
Some examples of widely accepted impossibilities in physics are perpetual motion machines, which violate the law of conservation of energy, exceeding the speed of light, which violates the implications of special relativity, the uncertainty principle of quantum mechanics, which asserts the impossibility of simultaneously knowing both the position and the momentum of a particle, and Bell's theorem: no physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
Laws as consequences of mathematical symmetries
Some laws reflect mathematical symmetries found in nature (e.g. the Pauli exclusion principle reflects identity of electrons, conservation laws reflect homogeneity of space, time, and Lorentz transformations reflect rotational symmetry of spacetime). Many fundamental physical laws are mathematical consequences of various symmetries of space, time, or other aspects of nature. Specifically, Noether's theorem connects some conservation laws to certain symmetries. For example, conservation of energy is a consequence of the shift symmetry of time (no moment of time is different from any other), while conservation of momentum is a consequence of the symmetry (homogeneity) of space (no place in space is special, or different from any other). The indistinguishability of all particles of each fundamental type (say, electrons, or photons) results in the Dirac and Bose quantum statistics which in turn result in the Pauli exclusion principle for fermions and in Bose–Einstein condensation for bosons. Special relativity uses rapidity to express motion according to the symmetries of hyperbolic rotation, a transformation mixing space and time. Symmetry between inertial and gravitational mass results in general relativity.
The inverse square law of interactions mediated by massless bosons is the mathematical consequence of the 3-dimensionality of space.
One strategy in the search for the most fundamental laws of nature is to search for the most general mathematical symmetry group that can be applied to the fundamental interactions.
Laws of physics
Conservation laws
Conservation and symmetry
Conservation laws are fundamental laws that follow from the homogeneity of space, time and phase, in other words symmetry.
Noether's theorem: Any quantity with a continuously differentiable symmetry in the action has an associated conservation law.
Conservation of mass was the first law to be understood since most macroscopic physical processes involving masses, for example, collisions of massive particles or fluid flow, give the appearance that mass is conserved. Mass conservation was observed to be true for all chemical reactions. In general, this is only approximate because, with the advent of relativity and experiments in nuclear and particle physics, it became clear that mass can be transformed into energy and vice versa, so mass is not always conserved but is part of the more general conservation of mass–energy.
Conservation of energy, momentum and angular momentum for isolated systems can be found to be symmetries in time, translation, and rotation.
Conservation of charge was also realized since charge has never been observed to be created or destroyed and only found to move from place to place.
Continuity and transfer
Conservation laws can be expressed using the general continuity equation for a conserved quantity, which can be written in differential form as:
∂ρ/∂t + ∇·J = 0
where ρ is some quantity per unit volume, J is the flux of that quantity (change in quantity per unit time per unit area). Intuitively, the divergence (denoted ∇·) of a vector field is a measure of flux diverging radially outwards from a point, so the negative is the amount piling up at a point; hence the rate of change of density in a region of space must be the amount of flux leaving or collecting in some region (see the main article for details). In the summary below, the fluxes of various physical quantities in transport, and their associated continuity equations, are collected for comparison.
Hydrodynamics (fluids): conserved quantity m = mass (kg); volume density ρ = mass density (kg m−3); flux J = ρu, where u = velocity field of fluid (m s−1); continuity equation ∂ρ/∂t + ∇·(ρu) = 0.
Electromagnetism (electric charge): conserved quantity q = electric charge (C); volume density ρ = electric charge density (C m−3); flux J = electric current density (A m−2); continuity equation ∂ρ/∂t + ∇·J = 0.
Thermodynamics (energy): conserved quantity E = energy (J); volume density u = energy density (J m−3); flux q = heat flux (W m−2); continuity equation ∂u/∂t + ∇·q = 0.
Quantum mechanics (probability): conserved quantity P = P(r, t) = ∫|Ψ|² d³r = probability distribution; volume density ρ = ρ(r, t) = |Ψ|² = probability density function (m−3), with Ψ the wavefunction of the quantum system; flux j = probability current (probability flux); continuity equation ∂ρ/∂t + ∇·j = 0.
More general equations are the convection–diffusion equation and Boltzmann transport equation, which have their roots in the continuity equation.
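For a scalar concentration c carried by a velocity field v with diffusivity D and a source term R, one common form of the convection–diffusion equation is shown below; setting D = 0 and R = 0, with J = c·v, recovers the bare continuity equation above.

```latex
% One common form of the convection-diffusion equation for a scalar concentration c
\frac{\partial c}{\partial t}
  = \nabla \cdot \left( D \, \nabla c \right)
  - \nabla \cdot \left( \mathbf{v} \, c \right)
  + R
```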
Laws of classical mechanics
Principle of least action
Classical mechanics, including Newton's laws, Lagrange's equations, Hamilton's equations, etc., can be derived from the following principle:
δS = 0
where S = ∫ L dt is the action, the time integral of the Lagrangian L
of the physical system between two times t1 and t2. The kinetic energy of the system is T (a function of the rate of change of the configuration of the system), and potential energy is V (a function of the configuration and its rate of change). The configuration of a system which has N degrees of freedom is defined by generalized coordinates q = (q1, q2, ... qN).
There are generalized momenta conjugate to these coordinates, p = (p1, p2, ..., pN), where:
pi = ∂L/∂q̇i
The action and Lagrangian both contain the dynamics of the system for all times. The term "path" simply refers to a curve traced out by the system in terms of the generalized coordinates in the configuration space, i.e. the curve q(t), parameterized by time (see also parametric equation for this concept).
The action is a functional rather than a function, since it depends on the Lagrangian, and the Lagrangian depends on the path q(t), so the action depends on the entire "shape" of the path for all times (in the time interval from t1 to t2). Between two instants of time, there are infinitely many paths, but one for which the action is stationary (to the first order) is the true path. The stationary value for the entire continuum of Lagrangian values corresponding to some path, not just one value of the Lagrangian, is required (in other words it is not as simple as "differentiating a function and setting it to zero, then solving the equations to find the points of maxima and minima etc", rather this idea is applied to the entire "shape" of the function, see calculus of variations for more details on this procedure).
Notice L is not the total energy E of the system due to the difference, rather than the sum:
L = T − V, whereas E = T + V.
The following general approaches to classical mechanics are summarized below in the order of establishment. They are equivalent formulations. Newton's is commonly used due to simplicity, but Hamilton's and Lagrange's equations are more general, and their range can extend into other branches of physics with suitable modifications.
Laws of motion
Principle of least action:
δS = 0
The Euler–Lagrange equations are:
d/dt (∂L/∂q̇i) = ∂L/∂qi
Using the definition of generalized momentum, there is the symmetry:
pi = ∂L/∂q̇i,  dpi/dt = ∂L/∂qi
Hamilton's equations:
dqi/dt = ∂H/∂pi,  dpi/dt = −∂H/∂qi
The Hamiltonian as a function of generalized coordinates and momenta has the general form:
H(q, p, t) = Σi pi·q̇i − L
Hamilton–Jacobi equation:
H(q, ∂S/∂q, t) + ∂S/∂t = 0
Newton's laws of motion
They are the low-velocity limit of relativistic mechanics. Alternative formulations of Newtonian mechanics are Lagrangian and Hamiltonian mechanics.
The laws can be summarized by two equations (since the 1st is a special case of the 2nd, zero resultant acceleration):
F = dp/dt,  Fij = −Fji
where p = momentum of body, Fij = force on body i by body j, Fji = force on body j by body i.
For a dynamical system the two equations (effectively) combine into one:
dpi/dt = FE + Σj≠i Fij
in which FE = resultant external force (due to any agent not part of system). Body i does not exert a force on itself.
From the above, any equation of motion in classical mechanics can be derived.
Corollaries in mechanics
Euler's laws of motion
Euler's equations (rigid body dynamics)
Corollaries in fluid mechanics
Equations describing fluid flow in various situations can be derived, using the above classical equations of motion and often conservation of mass, energy and momentum. Some elementary examples follow.
Archimedes' principle
Bernoulli's principle
Poiseuille's law
Stokes' law
Navier–Stokes equations
Faxén's law
Laws of gravitation and relativity
Some of the more famous laws of nature are found in Isaac Newton's theories of (now) classical mechanics, presented in his Philosophiae Naturalis Principia Mathematica, and in Albert Einstein's theory of relativity.
Modern laws
Special relativity
The two postulates of special relativity are not "laws" in themselves, but assumptions of their nature in terms of relative motion.
They can be stated as "the laws of physics are the same in all inertial frames" and "the speed of light is constant and has the same value in all inertial frames".
The said postulates lead to the Lorentz transformations – the transformation law between two frames of reference moving relative to each other. For any 4-vector A, the components in a second inertial frame are obtained as
A′ = ΛA
where Λ is the Lorentz transformation matrix; this replaces the Galilean transformation law from classical mechanics. The Lorentz transformations reduce to the Galilean transformations for velocities much less than the speed of light c.
The magnitudes of 4-vectors are invariants – not "conserved", but the same for all inertial frames (i.e. every observer in an inertial frame will agree on the same value). In particular, if A is the four-momentum, its magnitude yields the famous invariant equation relating mass, energy and momentum (see invariant mass):
E² = (pc)² + (mc²)²
in which the (more famous) mass–energy equivalence E = mc² is a special case.
General relativity
General relativity is governed by the Einstein field equations, which describe the curvature of space-time due to mass–energy equivalent to the gravitational field. Solving the equation for the geometry of space warped due to the mass distribution gives the metric tensor. Using the geodesic equation, the motion of masses falling along the geodesics can be calculated.
Gravitoelectromagnetism
In a relatively flat spacetime due to weak gravitational fields, gravitational analogues of Maxwell's equations can be found; the GEM equations, to describe an analogous gravitomagnetic field. They are well established by the theory, and experimental tests form ongoing research.
Einstein field equations (EFE):
R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν
where Λ = cosmological constant, R_μν = Ricci curvature tensor, R = Ricci scalar, T_μν = stress–energy tensor, g_μν = metric tensor.
Geodesic equation:
d²x^λ/dτ² + Γ^λ_μν (dx^μ/dτ)(dx^ν/dτ) = 0
where Γ is a Christoffel symbol of the second kind, containing the metric, and τ parameterizes the path.
GEM equations
If g is the gravitational field and H the gravitomagnetic field, the solutions in these limits are a set of four field equations closely analogous to Maxwell's equations, where ρ is the mass density and J is the mass current density or mass flux.
In addition there is a gravitomagnetic analogue of the Lorentz force on a moving test particle, where m is the rest mass of the particle and γ is the Lorentz factor.
Classical laws
Kepler's laws, though originally discovered from planetary observations (also due to Tycho Brahe), are true for any central forces.
Newton's law of universal gravitation:
For two point masses: F = G m1 m2 / |r|²
For a non-uniform mass distribution of local mass density ρ(r) of a body of volume V, this becomes an integral over the body:
g(r) = −G ∫V ρ(r′) (r − r′) / |r − r′|³ dV′
Gauss's law for gravity:
An equivalent statement to Newton's law is: ∇·g = −4πGρ
Kepler's 1st Law: Planets move in an ellipse, with the star at a focus:
r = ℓ / (1 + e cos θ)
where e = √(1 − b²/a²) is the eccentricity of the elliptic orbit, of semi-major axis a and semi-minor axis b, and ℓ is the semi-latus rectum. This equation in itself is nothing physically fundamental; simply the polar equation of an ellipse in which the pole (origin of polar coordinate system) is positioned at a focus of the ellipse, where the orbited star is.
Kepler's 2nd Law: equal areas are swept out in equal times (area bounded by two radial distances and the orbital circumference):
dA/dt = L / (2m)
where L is the orbital angular momentum of the particle (i.e. planet) of mass m about the focus of orbit.
Kepler's 3rd Law: The square of the orbital time period T is proportional to the cube of the semi-major axis a:
T² = (4π² / GM) a³
where M is the mass of the central body (i.e. star).
Thermodynamics
Laws of thermodynamics
Zeroth law of thermodynamics: If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with one another.
First law of thermodynamics: The change in internal energy dU in a closed system is accounted for entirely by the heat δQ absorbed by the system and the work δW done by the system:
dU = \delta Q - \delta W
Second law of thermodynamics: There are many statements of this law, perhaps the simplest being "the entropy of isolated systems never decreases",
\Delta S \geq 0
meaning reversible changes have zero entropy change, irreversible processes have positive entropy change, and processes with a negative entropy change are impossible.
Third law of thermodynamics: As the temperature T of a system approaches absolute zero, the entropy S approaches a minimum value C: as T → 0, S → C.
For homogeneous systems the first and second law can be combined into the fundamental thermodynamic relation:
dU = T\,dS - p\,dV
Onsager reciprocal relations: sometimes called the Fourth Law of Thermodynamics.
Newton's law of cooling
Fourier's law
Ideal gas law, which combines a number of separately developed gas laws into one (and has since been improved upon by other equations of state):
Boyle's law
Charles's law
Gay-Lussac's law
Avogadro's law
Dalton's law (of partial pressures)
Boltzmann equation
Carnot's theorem
Kopp's law
Electromagnetism
Maxwell's equations give the time-evolution of the electric and magnetic fields due to electric charge and current distributions. Given the fields, the Lorentz force law is the equation of motion for charges in the fields.
Maxwell's equations:
Gauss's law for electricity:
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
Gauss's law for magnetism:
\nabla \cdot \mathbf{B} = 0
Faraday's law:
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
Ampère's circuital law (with Maxwell's correction):
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
Lorentz force law:
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
Quantum electrodynamics (QED): Maxwell's equations are generally true and consistent with relativity – but they do not predict some observed quantum phenomena (e.g. light behaving as photons rather than purely as classical EM waves; see Maxwell's equations for details). They are modified in QED theory.
These equations can be modified to include magnetic monopoles, and are consistent with our observations of monopoles either existing or not existing; if they do not exist, the generalized equations reduce to the ones above; if they do, the equations become fully symmetric in electric and magnetic charges and currents. Indeed, there is a duality transformation where electric and magnetic charges can be "rotated into one another", and still satisfy Maxwell's equations.
Pre-Maxwell laws
These laws were found before the formulation of Maxwell's equations. They are not fundamental, since they can be derived from Maxwell's equations. Coulomb's law can be found from Gauss's Law (electrostatic form) and the Biot–Savart law can be deduced from Ampere's Law (magnetostatic form). Lenz's law and Faraday's law can be incorporated into the Maxwell–Faraday equation. Nonetheless they are still very effective for simple calculations.
Lenz's law
Coulomb's law
Biot–Savart law
Other laws
Ohm's law
Kirchhoff's laws
Joule's law
Photonics
Classically, optics is based on a variational principle: light travels from one point in space to another in the shortest time.
Fermat's principle
In geometric optics laws are based on approximations in Euclidean geometry (such as the paraxial approximation).
Law of reflection
Law of refraction, Snell's law
In physical optics, laws are based on physical properties of materials.
Brewster's angle
Malus's law
Beer–Lambert law
In actuality, optical properties of matter are significantly more complex and require quantum mechanics.
Laws of quantum mechanics
Quantum mechanics has its roots in postulates. This leads to results which are not usually called "laws", but hold the same status, in that all of quantum mechanics follows from them. These postulates can be summarized as follows:
The state of a physical system, be it a particle or a system of many particles, is described by a wavefunction.
Every physical quantity is described by an operator acting on the system; the measured quantity has a probabilistic nature.
The wavefunction obeys the Schrödinger equation. Solving this wave equation predicts the time-evolution of the system's behavior, analogous to solving Newton's laws in classical mechanics.
Two identical particles, such as two electrons, cannot be distinguished from one another by any means. Physical systems are classified by their symmetry properties.
These postulates in turn imply many other phenomena, e.g., uncertainty principles and the Pauli exclusion principle.
Quantum mechanics, quantum field theory
Schrödinger equation (general form): Describes the time dependence of a quantum mechanical system,
i\hbar\,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle
The Hamiltonian (in quantum mechanics) H is a self-adjoint operator acting on the state space, |ψ(t)⟩ (see Dirac notation) is the instantaneous quantum state vector at time t (its position representation is the wavefunction ψ(r, t)), i is the unit imaginary number, and ħ is the reduced Planck constant.
Wave–particle duality
Planck–Einstein law: the energy of photons is proportional to the frequency of the light (the constant of proportionality is the Planck constant, h):
E = h\nu
De Broglie wavelength: this laid the foundations of wave–particle duality, and was the key concept in the Schrödinger equation,
\lambda = \frac{h}{p}
Heisenberg uncertainty principle: Uncertainty in position multiplied by uncertainty in momentum is at least half of the reduced Planck constant, and similarly for time and energy:
\Delta x\,\Delta p \geq \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \geq \frac{\hbar}{2}
The uncertainty principle can be generalized to any pair of observables – see main article.
Wave mechanics – Schrödinger equation (original form):
i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2 \Psi + V\,\Psi
Pauli exclusion principle: No two identical fermions can occupy the same quantum state (bosons can). Mathematically, if two particles are interchanged, fermionic wavefunctions are anti-symmetric, while bosonic wavefunctions are symmetric:
\psi(\ldots, \mathbf{r}_i, s_i, \ldots, \mathbf{r}_j, s_j, \ldots) = \mp\,\psi(\ldots, \mathbf{r}_j, s_j, \ldots, \mathbf{r}_i, s_i, \ldots)
with the minus sign for fermions and the plus sign for bosons, where ri is the position of particle i and si its spin. There is no way to keep track of particles physically; labels are used only mathematically to prevent confusion.
Radiation laws
Applying electromagnetism, thermodynamics, and quantum mechanics, to atoms and molecules, some laws of electromagnetic radiation and light are as follows.
Stefan–Boltzmann law
Planck's law of black-body radiation
Wien's displacement law
Radioactive decay law
Laws of chemistry
Chemical laws are those laws of nature relevant to chemistry. Historically, observations led to many empirical laws, though now it is known that chemistry has its foundations in quantum mechanics.
Quantitative analysis
The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.
Additional laws of chemistry elaborate on the law of conservation of mass. Joseph Proust's law of definite composition says that pure chemicals are composed of elements in a definite formulation; we now know that the structural arrangement of these elements is also important.
Dalton's law of multiple proportions says that these chemicals will present themselves in proportions that are small whole numbers; although in many systems (notably biomacromolecules and minerals) the ratios tend to require large numbers, and are frequently represented as a fraction.
The law of definite composition and the law of multiple proportions are the first two of the three laws of stoichiometry, the proportions by which the chemical elements combine to form chemical compounds. The third law of stoichiometry is the law of reciprocal proportions, which provides the basis for establishing equivalent weights for each chemical element. Elemental equivalent weights can then be used to derive atomic weights for each element.
More modern laws of chemistry define the relationship between energy and its transformations.
Reaction kinetics and equilibria
In equilibrium, molecules exist in a mixture defined by the transformations possible on the timescale of the equilibrium, and are present in a ratio defined by the intrinsic energy of the molecules—the lower the intrinsic energy, the more abundant the molecule. Le Chatelier's principle states that the system opposes changes in conditions from equilibrium states, i.e. there is an opposition to changing the state of an equilibrium reaction.
Transforming one structure to another requires the input of energy to cross an energy barrier; this can come from the intrinsic energy of the molecules themselves, or from an external source which will generally accelerate transformations. The higher the energy barrier, the slower the transformation occurs.
There is a hypothetical intermediate, or transition structure, that corresponds to the structure at the top of the energy barrier. The Hammond–Leffler postulate states that this structure looks most similar to the product or starting material which has intrinsic energy closest to that of the energy barrier. Stabilizing this hypothetical intermediate through chemical interaction is one way to achieve catalysis.
All chemical processes are reversible (law of microscopic reversibility), although some processes have such an energy bias that they are essentially irreversible.
The reaction rate is characterized by a mathematical parameter known as the rate constant. The Arrhenius equation, an empirical law, gives the dependence of the rate constant on temperature and activation energy.
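In its usual form the Arrhenius equation reads
k = A\, e^{-E_a/(R T)}
where k is the rate constant, A the pre-exponential (frequency) factor, E_a the activation energy, R the gas constant and T the absolute temperature; a higher barrier or a lower temperature gives a smaller rate constant, matching the qualitative picture above.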
Thermochemistry
Dulong–Petit law
Gibbs–Helmholtz equation
Hess's law
Gas laws
Raoult's law
Henry's law
Chemical transport
Fick's laws of diffusion
Graham's law
Lamm equation
Laws of biology
Ecology
Competitive exclusion principle or Gause's law
Genetics
Mendelian laws (Dominance and Uniformity, segregation of genes, and Independent Assortment)
Hardy–Weinberg principle
Natural selection
Whether or not Natural Selection is a “law of nature” is controversial among biologists. Henry Byerly, an American philosopher known for his work on evolutionary theory, discussed the problem of interpreting a principle of natural selection as a law. He suggested a formulation of natural selection as a framework principle that can contribute to a better understanding of evolutionary theory. His approach was to express relative fitness, the propensity of a genotype to increase in proportionate representation in a competitive environment, as a function of adaptedness (adaptive design) of the organism.
Laws of Earth sciences
Geography
Arbia's law of geography
Tobler's first law of geography
Tobler's second law of geography
Geology
Archie's law
Buys Ballot's law
Birch's law
Byerlee's law
Principle of original horizontality
Law of superposition
Principle of lateral continuity
Principle of cross-cutting relationships
Principle of faunal succession
Principle of inclusions and components
Walther's law
Other fields
Some mathematical theorems and axioms are referred to as laws because they provide logical foundation to empirical laws.
Examples of other observed phenomena sometimes described as laws include the Titius–Bode law of planetary positions, Zipf's law of linguistics, and Moore's law of technological growth. Many of these laws fall within the scope of uncomfortable science. Other laws are pragmatic and observational, such as the law of unintended consequences. By analogy, principles in other fields of study are sometimes loosely referred to as "laws". These include Occam's razor as a principle of philosophy and the Pareto principle of economics.
History
The observation and detection of underlying regularities in nature date from prehistoric times – the recognition of cause-and-effect relationships implicitly recognises the existence of laws of nature. The recognition of such regularities as independent scientific laws per se, though, was limited by their entanglement in animism, and by the attribution of many effects that do not have readily obvious causes—such as physical phenomena—to the actions of gods, spirits, supernatural beings, etc. Observation and speculation about nature were intimately bound up with metaphysics and morality.
In Europe, systematic theorizing about nature (physis) began with the early Greek philosophers and scientists and continued into the Hellenistic and Roman imperial periods, during which times the intellectual influence of Roman law increasingly became paramount. The formula "law of nature" first appears as "a live metaphor" favored by Latin poets Lucretius, Virgil, Ovid, Manilius, in time gaining a firm theoretical presence in the prose treatises of Seneca and Pliny. Why this Roman origin? According to [historian and classicist Daryn] Lehoux's persuasive narrative, the idea was made possible by the pivotal role of codified law and forensic argument in Roman life and culture.
For the Romans ... the place par excellence where ethics, law, nature, religion and politics overlap is the law court. When we read Seneca's Natural Questions, and watch again and again just how he applies standards of evidence, witness evaluation, argument and proof, we can recognize that we are reading one of the great Roman rhetoricians of the age, thoroughly immersed in forensic method. And not Seneca alone. Legal models of scientific judgment turn up all over the place, and for example prove equally integral to Ptolemy's approach to verification, where the mind is assigned the role of magistrate, the senses that of disclosure of evidence, and dialectical reason that of the law itself.
The precise formulation of what are now recognized as modern and valid statements of the laws of nature dates from the 17th century in Europe, with the beginning of accurate experimentation and the development of advanced forms of mathematics. During this period, natural philosophers such as Isaac Newton (1642–1727) were influenced by a religious view – stemming from medieval concepts of divine law – which held that God had instituted absolute, universal and immutable physical laws. In chapter 7 of The World, René Descartes (1596–1650) described "nature" as matter itself, unchanging as created by God, thus changes in parts "are to be attributed to nature. The rules according to which these changes take place I call the 'laws of nature'." The modern scientific method which took shape at this time (with Francis Bacon (1561–1626) and Galileo (1564–1642)) contributed to a trend of separating science from theology, with minimal speculation about metaphysics and ethics. (Natural law in the political sense, conceived as universal (i.e., divorced from sectarian religion and accidents of place), was also elaborated in this period by scholars such as Grotius (1583–1645), Spinoza (1632–1677), and Hobbes (1588–1679).)
The distinction between natural law in the political-legal sense and law of nature or physical law in the scientific sense is a modern one, both concepts being equally derived from physis, the Greek word (translated into Latin as natura) for nature.
See also
References
Further reading
Francis Bacon (1620). Novum Organum.
External links
Physics Formulary, a useful book in different formats containing many of the physical laws and formulae.
Eformulae.com, website containing most of the formulae in different disciplines.
Stanford Encyclopedia of Philosophy: "Laws of Nature" by John W. Carroll.
Baaquie, Belal E. "Laws of Physics : A Primer". Core Curriculum, National University of Singapore.
Francis, Erik Max. "The laws list". Physics. Alcyone Systems
Pazameta, Zoran. "The laws of nature". Committee for the Scientific Investigation of Claims of the Paranormal.
The Internet Encyclopedia of Philosophy. "Laws of Nature" – By Norman Swartz
Microevolution | Microevolution is the change in allele frequencies that occurs over time within a population. This change is due to four different processes: mutation, selection (natural and artificial), gene flow and genetic drift. This change happens over a relatively short (in evolutionary terms) amount of time compared to the changes termed macroevolution.
Population genetics is the branch of biology that provides the mathematical structure for the study of the process of microevolution. Ecological genetics concerns itself with observing microevolution in the wild. Typically, observable instances of evolution are examples of microevolution; for example, bacterial strains that have antibiotic resistance.
Microevolution provides the raw material for macroevolution.
Difference from macroevolution
Macroevolution is guided by sorting of interspecific variation ("species selection"), as opposed to sorting of intraspecific variation in microevolution. Species selection may occur as (a) effect-macroevolution, where organism-level traits (aggregate traits) affect speciation and extinction rates, and (b) strict-sense species selection, where species-level traits (e.g. geographical range) affect speciation and extinction rates. Macroevolution does not produce evolutionary novelties, but it determines their proliferation within the clades in which they evolved, and it adds species-level traits as non-organismic factors of sorting to this process.
Four processes
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are caused by radiation, viruses, transposons and mutagenic chemicals, as well as errors that occur during meiosis or DNA replication. Errors are introduced particularly often in the process of DNA replication, in the polymerization of the second strand. These errors can also be induced by the organism itself, by cellular processes such as hypermutation. Mutations can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the proofreading ability of DNA polymerases. (Without proofreading error rates are a thousandfold higher; because many viruses rely on DNA and RNA polymerases that lack proofreading ability, they experience higher mutation rates.) Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well, and cells use DNA repair mechanisms to repair mismatches and breaks in DNA—nevertheless, the repair sometimes fails to return the DNA to its original sequence.
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment making some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions or deletions of entire regions, or the accidental exchanging of whole parts between different chromosomes (called translocation).
Mutation can result in several different types of change in DNA sequences; these can either have no effect, alter the product of a gene, or prevent the gene from functioning. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, this will probably be harmful, with about 70 percent of these mutations having damaging effects, and the remainder being either neutral or weakly beneficial. Due to the damaging effects that mutations can have on cells, organisms have evolved mechanisms such as DNA repair to remove mutations. Therefore, the optimal mutation rate for a species is a trade-off between costs of a high mutation rate, such as deleterious mutations, and the metabolic costs of maintaining systems to reduce the mutation rate, such as DNA repair enzymes. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage since these viruses will evolve constantly and rapidly, and thus evade the defensive responses of e.g. the human immune system.
Mutations can involve large sections of DNA becoming duplicated, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger families of genes of shared ancestry. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions.
Here, domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for color vision and one for night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA.
Selection
Selection is the process by which heritable traits that make it more likely for an organism to survive and successfully reproduce become more common in a population over successive generations.
It is sometimes valuable to distinguish between naturally occurring selection, natural selection, and selection that is a manifestation of choices made by humans, artificial selection. This distinction is rather diffuse. Natural selection is nevertheless the dominant part of selection.
The natural genetic variation within a population of organisms means that some individuals will survive more successfully than others in their current environment. Factors which affect reproductive success are also important, an issue which Charles Darwin developed in his ideas on sexual selection.
Natural selection acts on the phenotype, or the observable characteristics of an organism, but the genetic (heritable) basis of any phenotype which gives a reproductive advantage will become more common in a population (see allele frequency). Over time, this process can result in adaptations that specialize organisms for particular ecological niches and may eventually result in the speciation (the emergence of new species).
Natural selection is one of the cornerstones of modern biology. The term was introduced by Darwin in his groundbreaking 1859 book On the Origin of Species, in which natural selection was described by analogy to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favored for reproduction. The concept of natural selection was originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, nothing was known of modern genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical and molecular genetics is termed the modern evolutionary synthesis. Natural selection remains the primary explanation for adaptive evolution.
Genetic drift
Genetic drift is the change in the relative frequency with which a gene variant (allele) occurs in a population due to random sampling. That is, the alleles in the offspring are a random sample of those in the parents, and chance plays a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction (or percentage) of its gene copies that share a particular form, out of the total number of copies of that gene.
Genetic drift is an evolutionary process which leads to changes in allele frequencies over time. It may cause gene variants to disappear completely, and thereby reduce genetic variability. In contrast to natural selection, which makes gene variants more common or less common depending on their reproductive success, the changes due to genetic drift are not driven by environmental or adaptive pressures, and may be beneficial, neutral, or detrimental to reproductive success.
The effect of genetic drift is larger in small populations, and smaller in large populations. Vigorous debates are waged among scientists over the relative importance of genetic drift compared with natural selection. Ronald Fisher held the view that genetic drift plays at most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution which claims that most of the changes in the genetic material are caused by genetic drift. The predictions of neutral theory, based on genetic drift, do not fit recent data on whole genomes well: these data suggest that the frequencies of neutral alleles change primarily due to selection at linked sites, rather than due to genetic drift by means of sampling error.
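As an illustration of drift as pure sampling error, the following minimal sketch (a Wright–Fisher-style model; the population size, starting frequency and number of generations are arbitrary illustrative choices) tracks an allele frequency when each generation's gene copies are drawn at random from the previous generation, with no selection, mutation or migration.

import random

def simulate_drift(n_individuals=100, p0=0.5, generations=200, seed=1):
    """Track one allele's frequency under pure random sampling
    (no selection, mutation or migration)."""
    random.seed(seed)
    copies = 2 * n_individuals          # a diploid population carries 2N gene copies
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # each copy in the next generation carries the focal allele with probability p
        count = sum(1 for _ in range(copies) if random.random() < p)
        p = count / copies
        trajectory.append(p)
        if p in (0.0, 1.0):             # the allele has been lost or fixed; drift stops
            break
    return trajectory

trajectory = simulate_drift()
print(f"frequency after {len(trajectory) - 1} generations: {trajectory[-1]:.3f}")

Re-running the sketch with a smaller n_individuals makes loss or fixation of the allele happen much sooner, reflecting the stronger effect of drift in small populations.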
Gene flow
Gene flow is the exchange of genes between populations, which are usually of the same species. Examples of gene flow within a species include the migration and then breeding of organisms, or the exchange of pollen. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer.
Migration into or out of a population can change allele frequencies, as well as introducing genetic variation into a population. Immigration may add new genetic material to the established gene pool of a population. Conversely, emigration may remove genetic material. As barriers to reproduction between two diverging populations are required for the populations to become new species, gene flow may slow this process by spreading genetic differences between the populations. Gene flow is hindered by mountain ranges, oceans and deserts or even man-made structures such as the Great Wall of China, which has hindered the flow of plant genes.
Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile, due to the two different sets of chromosomes being unable to pair up during meiosis. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridization in developing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
Hybridization is, however, an important means of speciation in plants, since polyploidy (having more than two copies of each chromosome) is tolerated in plants more readily than in animals. Polyploidy is important in hybrids as it allows reproduction, with the two different sets of chromosomes each being able to pair with an identical partner during meiosis. Polyploid hybrids also have more genetic diversity, which allows them to avoid inbreeding depression in small populations.
Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean beetle Callosobruchus chinensis may also have occurred. An example of larger-scale transfer is found in the eukaryotic bdelloid rotifers, which appear to have received a range of genes from bacteria, fungi, and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and prokaryotes, during the acquisition of chloroplasts and mitochondria.
Gene flow is the transfer of alleles from one population to another.
Migration into or out of a population may be responsible for a marked change in allele frequencies. Immigration may also result in the addition of new genetic variants to the established gene pool of a particular species or population.
There are a number of factors that affect the rate of gene flow between different populations. One of the most significant factors is mobility, as greater mobility of an individual tends to give it greater migratory potential. Animals tend to be more mobile than plants, although pollen and seeds may be carried great distances by animals or wind.
Maintained gene flow between two populations can also lead to a combination of the two gene pools, reducing the genetic variation between the two groups. It is for this reason that gene flow strongly acts against speciation, by recombining the gene pools of the groups, and thus, repairing the developing differences in genetic variation that would have led to full speciation and creation of daughter species.
For example, if a species of grass grows on both sides of a highway, pollen is likely to be transported from one side to the other and vice versa. If this pollen is able to fertilise the plant where it ends up and produce viable offspring, then the alleles in the pollen have effectively been able to move from the population on one side of the highway to the other.
Origin and extended use of the term
Origin
The term microevolution was first used by botanist Robert Greenleaf Leavitt in the journal Botanical Gazette in 1909, addressing what he called the "mystery" of how formlessness gives rise to form.
...The production of form from formlessness in the egg-derived individual, the multiplication of parts and the orderly creation of diversity among them, in an actual evolution, of which anyone may ascertain the facts, but of which no one has dissipated the mystery in any significant measure. This microevolution forms an integral part of the grand evolution problem and lies at the base of it, so that we shall have to understand the minor process before we can thoroughly comprehend the more general one...
However, Leavitt was using the term to describe what we would now call developmental biology; it was not until Russian Entomologist Yuri Filipchenko used the terms "macroevolution" and "microevolution" in 1927 in his German language work, Variabilität und Variation, that it attained its modern usage. The term was later brought into the English-speaking world by Filipchenko's student Theodosius Dobzhansky in his book Genetics and the Origin of Species (1937).
Use in creationism
In young Earth creationism and baraminology a central tenet is that evolution can explain diversity in a limited number of created kinds which can interbreed (which they call "microevolution") while the formation of new "kinds" (which they call "macroevolution") is impossible. This acceptance of "microevolution" only within a "kind" is also typical of old Earth creationism.
Scientific organizations such as the American Association for the Advancement of Science describe microevolution as small scale change within species, and macroevolution as the formation of new species, but otherwise not being different from microevolution. In macroevolution, an accumulation of microevolutionary changes leads to speciation. The main difference between the two processes is that one occurs within a few generations, whilst the other takes place over thousands of years (i.e. a quantitative difference). Essentially they describe the same process; although evolution beyond the species level results in beginning and ending generations which could not interbreed, the intermediate generations could.
Opponents to creationism argue that changes in the number of chromosomes can be accounted for by intermediate stages in which a single chromosome divides in generational stages, or multiple chromosomes fuse, and cite the chromosome difference between humans and the other great apes as an example. Creationists insist that since the actual divergence between the other great apes and humans was not observed, the evidence is circumstantial.
Describing the fundamental similarity between macro and microevolution in his authoritative textbook "Evolutionary Biology," biologist Douglas Futuyma writes,
Contrary to the claims of some antievolution proponents, evolution of life forms beyond the species level (i.e. speciation) has indeed been observed and documented by scientists on numerous occasions. In creation science, creationists accepted speciation as occurring within a "created kind" or "baramin", but objected to what they called "third level-macroevolution" of a new genus or higher rank in taxonomy. There is ambiguity in the ideas as to where to draw a line on "species", "created kinds", and what events and lineages fall within the rubric of microevolution or macroevolution.
See also
Punctuated equilibrium - due to gene flow, major evolutionary changes may be rare
References
External links
Microevolution (UC Berkeley)
Microevolution vs Macroevolution
Rote learning | Rote learning is a memorization technique based on repetition. The method rests on the premise that the recall of repeated material becomes faster the more one repeats it. Some of the alternatives to rote learning include meaningful learning, associative learning, spaced repetition and active learning.
Versus critical thinking
Rote learning is widely used in the mastery of foundational knowledge. Examples of school topics where rote learning is frequently used include phonics in reading, the periodic table in chemistry, multiplication tables in mathematics, anatomy in medicine, cases or statutes in law, basic formulae in any science, etc. By definition, rote learning eschews comprehension, so by itself it is an ineffective tool in mastering any complex subject at an advanced level. For instance, one illustration of rote learning can be observed in preparing quickly for exams, a technique which may be colloquially referred to as "cramming".
Rote learning is sometimes disparaged with the derogative terms parrot fashion, regurgitation, cramming, or mugging because one who engages in rote learning may give the wrong impression of having understood what they have written or said. It is strongly discouraged by many new curriculum standards. For example, science and mathematics standards in the United States specifically emphasize the importance of deep understanding over the mere recall of facts. The National Council of Teachers of Mathematics stated:
"More than ever, mathematics must include the mastery of concepts instead of mere memorization and the following of procedures. More than ever, school mathematics must include an understanding of how to use technology to arrive meaningfully at solutions to problems instead of endless attention to increasingly outdated computational tedium."
However, advocates of traditional education have criticized the new American standards as slighting learning basic facts and elementary arithmetic, and replacing content with process-based skills. In math and science, rote methods are often used, for example to memorize formulas. There is greater understanding if students commit a formula to memory through exercises that use the formula rather than through rote repetition of the formula. Newer standards often recommend that students derive formulas themselves to achieve the best understanding. Nothing is faster than rote learning if a formula must be learned quickly for an imminent test, and rote methods can be helpful for committing an understood fact to memory. However, students who learn with understanding are able to transfer their knowledge to tasks requiring problem-solving with greater success than those who learn only by rote.
On the other side, those who disagree with the inquiry-based philosophy maintain that students must first develop computational skills before they can understand concepts of mathematics. These people would argue that time is better spent practicing skills rather than in investigations inventing alternatives, or justifying more than one correct answer or method. In this view, estimating answers is insufficient and, in fact, is considered to be dependent on strong foundational skills. Learning abstract concepts of mathematics is perceived to depend on a solid base of knowledge of the tools of the subject. Thus, these people believe that rote learning is an important part of the learning process.
In computer science
Rote learning is also used to describe a simple learning pattern used in machine learning, although it does not involve repetition, unlike the usual meaning of rote learning. The machine is programmed to keep a history of calculations and compare new input against its history of inputs and outputs, retrieving the stored output if present. This pattern requires that the machine can be modeled as a pure function — always producing the same output for the same input — and can be formally described as follows:
f(x) → y  ⇒  store the pair ((x), (y))
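A minimal sketch of this pattern in Python (the function names and the wrapped example are illustrative, not taken from any particular library):

def rote_learner(f):
    """Wrap a pure function so previously seen inputs are answered from a
    stored history of (input, output) pairs instead of being recomputed."""
    history = {}                       # stored input -> output pairs
    def learned(x):
        if x not in history:           # unseen input: compute and store the result
            history[x] = f(x)
        return history[x]              # otherwise recall the stored output
    return learned

@rote_learner
def square(x):
    return x * x

print(square(12))   # computed once and stored
print(square(12))   # answered from the stored history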
Rote learning was used by Arthur Samuel's checkers-playing program, running on an IBM 701, a milestone in the use of artificial intelligence.
Learning methods for school
The flashcard, outline, and mnemonic device are traditional tools for memorizing course material and are examples of rote learning.
See also
References
External links
Solvation | Solvation describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent. Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding, and van der Waals forces. Solvation of a solute by water is called hydration.
Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure.
Distinction from solubility
By an IUPAC definition, solvation is an interaction of a solute with the solvent, which leads to stabilization of the solute species in the solution. In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number, and the complex stability constants. The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin.
Solvation is, in concept, distinct from solubility. Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation. The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc.
Solvents and intermolecular interactions
Solvation involves different types of intermolecular interactions:
Hydrogen bonding
Ion–dipole interactions
The van der Waals forces, which consist of dipole–dipole, dipole–induced dipole, and induced dipole–induced dipole interactions.
Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent.
Solvent polarity is the most important factor in determining how well it solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute. The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. Water is the most common and well-studied polar solvent, but others exist, such as ethanol, methanol, acetone, acetonitrile, and dimethyl sulfoxide. Polar solvents are often found to have a high dielectric constant, although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs.
Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds.
Some chemical compounds experience solvatochromism, which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute.
Solvation energy and thermodynamic considerations
The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution.
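Written symbolically, the condition described above for a favorable solvation is
\Delta G_{\text{solution}} = \Delta H_{\text{solution}} - T\,\Delta S_{\text{solution}} < 0
where T is the absolute temperature.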
Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules lead to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain.
The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy. The solvation energy (change in Gibbs free energy) is the change in enthalpy minus the product of temperature (in Kelvin) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures.
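This temperature dependence can be made explicit with the van 't Hoff relation for the dissolution equilibrium constant K,
\frac{d \ln K}{dT} = \frac{\Delta H^{\circ}_{\text{sol}}}{R T^{2}}
so for a gas whose enthalpy of solution is negative (dissolution is exothermic), K – and hence the solubility – decreases as the temperature rises.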
Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution. A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers.
Although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated.
Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions.
In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, if sodium chloride is added to water, the salt dissociates into the ions Na+(aq) and Cl−(aq). The equilibrium constant for this dissociation can be predicted from the change in Gibbs energy of this reaction.
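For such a dissolution reaction, the standard relation linking the Gibbs energy change to the equilibrium constant is
\Delta G^{\circ} = -R T \ln K
so that, for the dissociation NaCl(s) → Na+(aq) + Cl−(aq), a more negative ΔG° corresponds to a larger equilibrium constant and a more complete dissolution.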
The Born equation is used to estimate Gibbs free energy of solvation of a gaseous ion.
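In one common form – treating the ion as a charged sphere of effective radius r0 transferred from vacuum into a continuous dielectric of relative permittivity εr – the Born estimate is
\Delta G_{\text{solv}} = -\frac{N_A\, z^{2} e^{2}}{8\pi \varepsilon_0 r_0}\left(1 - \frac{1}{\varepsilon_r}\right)
where z is the ion's charge number, e the elementary charge and N_A the Avogadro constant; more polar solvents (larger εr) and smaller ions give more negative, i.e. more favorable, solvation energies.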
Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series.
Macromolecules and assemblies
Solvation (specifically, hydration) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5-10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure, including hydrogen bonding. Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation.
Solvation also affects host–guest complexation. Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent.
Hydration affects electronic and vibrational properties of biomolecules.
Importance of solvation in computer simulations
Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent (in vacuo) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent.
As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep.
See also
Born equation
Saturated solution
Solubility equilibrium
Solvent models
Supersaturation
Water model
References
Further reading
External links
Medicinal chemistry | Medicinal or pharmaceutical chemistry is a scientific discipline at the intersection of chemistry and pharmacy involved with designing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships (QSAR).
Medicinal chemistry is a highly interdisciplinary science combining organic chemistry with biochemistry, computational chemistry, pharmacology, molecular biology, statistics, and physical chemistry.
Compounds used as medicines are most often organic compounds, which are often divided into the broad classes of small organic molecules (e.g., atorvastatin, fluticasone, clopidogrel) and "biologics" (infliximab, erythropoietin, insulin glargine), the latter of which are most often medicinal preparations of proteins (natural and recombinant antibodies, hormones etc.). Medicines can also be inorganic and organometallic compounds, commonly referred to as metallodrugs (e.g., platinum, lithium and gallium-based agents such as cisplatin, lithium carbonate and gallium nitrate, respectively). The discipline of Medicinal Inorganic Chemistry investigates the role of metals in medicine (metallotherapeutics), which involves the study and treatment of diseases and health conditions associated with inorganic metals in biological systems. There are several metallotherapeutics approved for the treatment of cancer (e.g., contain Pt, Ru, Gd, Ti, Ge, V, and Ga), antimicrobials (e.g., Ag, Cu, and Ru), diabetes (e.g., V and Cr), broad-spectrum antibiotic (e.g., Bi), bipolar disorder (e.g., Li). Other areas of study include: metallomics, genomics, proteomics, diagnostic agents (e.g., MRI: Gd, Mn; X-ray: Ba, I) and radiopharmaceuticals (e.g., 99mTc for diagnostics, 186Re for therapeutics).
In particular, medicinal chemistry in its most common practice—focusing on small organic molecules—encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure–activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.
At the biological interface, medicinal chemistry combines to form a set of highly interdisciplinary sciences, setting its organic, physical, and computational emphases alongside biological areas such as biochemistry, molecular biology, pharmacognosy and pharmacology, toxicology and veterinary and human medicine; these, with project management, statistics, and pharmaceutical business practices, systematically oversee altering identified chemical agents such that after pharmaceutical formulation, they are safe and efficacious, and therefore suitable for use in treatment of disease.
In the path of drug discovery
Discovery
Discovery is the identification of novel active chemical compounds, often called "hits", which are typically found by assay of compounds for a desired biological activity. Initial hits can come from repurposing existing agents toward new pathologic processes, and from observations of biologic effects of new or existing natural products from bacteria, fungi, plants, etc. In addition, hits also routinely originate from structural observations of small molecule "fragments" bound to therapeutic targets (enzymes, receptors, etc.), where the fragments serve as starting points to develop more chemically complex forms by synthesis. Finally, hits also regularly originate from en-masse testing of chemical compounds against biological targets using biochemical or chemoproteomics assays, where the compounds may be from novel synthetic chemical libraries known to have particular properties (kinase inhibitory activity, diversity or drug-likeness, etc.), or from historic chemical compound collections or libraries created through combinatorial chemistry. While a number of approaches toward the identification and development of hits exist, the most successful techniques are based on chemical and biological intuition developed in team environments through years of rigorous practice aimed solely at discovering new therapeutic agents.
Hit to lead and lead optimization
Further chemistry and analysis is necessary, first to identify the "triage" compounds that do not provide series displaying suitable SAR and chemical characteristics associated with long-term potential for development, then to improve the remaining hit series concerning the desired primary activity, as well as secondary activities and physiochemical properties such that the agent will be useful when administered in real patients. In this regard, chemical modifications can improve the recognition and binding geometries (pharmacophores) of the candidate compounds, and so their affinities for their targets, as well as improving the physicochemical properties of the molecule that underlie necessary pharmacokinetic/pharmacodynamic (PK/PD), and toxicologic profiles (stability toward metabolic degradation, lack of geno-, hepatic, and cardiac toxicities, etc.) such that the chemical compound or biologic is suitable for introduction into animal and human studies.
Process chemistry and development
The final synthetic chemistry stages involve the production of a lead compound in suitable quantity and quality to allow large scale animal testing, and then human clinical trials. This involves the optimization of the synthetic route for bulk industrial production, and discovery of the most suitable drug formulation. The former of these is still the bailiwick of medicinal chemistry; the latter brings in the specialization of formulation science (with its components of physical and polymer chemistry and materials science). The synthetic chemistry specialization in medicinal chemistry aimed at adaptation and optimization of the synthetic route for industrial scale syntheses of hundreds of kilograms or more is termed process synthesis, and involves thorough knowledge of acceptable synthetic practice in the context of large scale reactions (reaction thermodynamics, economics, safety, etc.). Critical at this stage is the transition to more stringent GMP requirements for material sourcing, handling, and chemistry.
Synthetic analysis
The synthetic methodology employed in medicinal chemistry is subject to constraints that do not apply to traditional organic synthesis. Owing to the prospect of scaling the preparation, safety is of paramount importance. The potential toxicity of reagents affects methodology.
Structural analysis
The structures of pharmaceuticals are assessed in many ways, in part as a means to predict efficacy, stability, and accessibility. Lipinski's rule of five focuses on molecular weight, lipophilicity, and the numbers of hydrogen bond donors and acceptors; related guidelines also consider the number of rotatable bonds and the polar surface area. Other parameters by which medicinal chemists assess or classify their compounds are: synthetic complexity, chirality, flatness, and aromatic ring count.
Structural analysis of lead compounds is often performed through computational methods prior to actual synthesis of the ligand(s). This is done for a number of reasons, including but not limited to: time and financial considerations (expenditure, etc.). Once the ligand of interest has been synthesized in the laboratory, analysis is then performed by traditional methods (TLC, NMR, GC/MS, and others).
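As an illustration of how such computational pre-screening can be expressed, the following minimal Python sketch counts violations of Lipinski's rule of five. It assumes the molecular descriptors (molecular weight, logP, donor and acceptor counts) have already been computed by a cheminformatics toolkit; the candidate values below are invented for illustration.

```python
# Minimal sketch of a rule-of-five style filter. The descriptor values are
# assumed to have been computed elsewhere (e.g. by a cheminformatics toolkit);
# the candidate compound below is invented.
from dataclasses import dataclass

@dataclass
class Descriptors:
    mol_weight: float       # g/mol
    log_p: float            # octanol-water partition coefficient
    h_bond_donors: int
    h_bond_acceptors: int

def lipinski_violations(d: Descriptors) -> int:
    """Count violations of Lipinski's rule of five."""
    return sum([
        d.mol_weight > 500,
        d.log_p > 5,
        d.h_bond_donors > 5,
        d.h_bond_acceptors > 10,
    ])

candidate = Descriptors(mol_weight=342.4, log_p=2.1,
                        h_bond_donors=2, h_bond_acceptors=5)
print(lipinski_violations(candidate))  # 0 -> passes the classic filter
```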
Training
Medicinal chemistry is by nature an interdisciplinary science, and practitioners have a strong background in organic chemistry, which must eventually be coupled with a broad understanding of biological concepts related to cellular drug targets. Medicinal chemists are principally industrial scientists (but see following), working as part of an interdisciplinary team that uses their chemistry abilities, especially their synthetic abilities, to design effective therapeutic agents. The training path is long: practitioners are often required to attain a 4-year bachelor's degree followed by a 4–6 year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship of 2 or more years after receiving the Ph.D., bringing the total length of training to roughly 10 to 12 years of higher education. However, employment opportunities at the Master's level also exist in the pharmaceutical industry, and at that and the Ph.D. level there are further opportunities for employment in academia and government.
Graduate-level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry rather than medicinal chemistry, and the preponderance of positions are in research, where the net is necessarily cast widest and where the broadest synthetic activity occurs.
In research on small-molecule therapeutics, there is a clear emphasis on training that provides breadth of synthetic experience and "pace" of bench operations (e.g., favoring individuals with purely synthetic organic and natural-products synthesis backgrounds from Ph.D. and postdoctoral positions). In the medicinal chemistry specialty areas associated with the design and synthesis of chemical libraries or the execution of process chemistry aimed at viable commercial syntheses (areas generally with fewer opportunities), training paths are often much more varied (e.g., including focused training in physical organic chemistry, library-related syntheses, etc.).
As such, most entry-level workers in medicinal chemistry, especially in the U.S., do not have formal training in medicinal chemistry but receive the necessary medicinal chemistry and pharmacologic background after employment, when the pharmaceutical company provides its particular understanding or model of "medichem" training through active involvement in practical synthesis on therapeutic projects. (The same is somewhat true of computational medicinal chemistry specialties, but not to the same degree as in synthetic areas.)
See also
Bioisostere
Biological machines
Chemoproteomics
Drug design
Pharmacognosy
Pharmacokinetics
Pharmacology
Pharmacophore
Xenobiotic metabolism
References
Cheminformatics
Modeling and simulation
Modeling and simulation (M&S) is the use of models (e.g., physical, mathematical, behavioral, or logical representation of a system, entity, phenomenon, or process) as a basis for simulations to develop data utilized for managerial or technical decision making.
In the computer application of modeling and simulation, a computer is used to build a mathematical model containing the key parameters of the physical model. The mathematical model represents the physical model in virtual form, and conditions are applied to set up the experiment of interest. The simulation starts – i.e., the computer calculates the results of those conditions on the mathematical model – and outputs results in a format that is either machine- or human-readable, depending upon the implementation.
The use of M&S within engineering is well recognized. Simulation technology belongs to the tool set of engineers of all application domains and has been included in the body of knowledge of engineering management. M&S helps to reduce costs, increase the quality of products and systems, and document and archive lessons learned. Because the results of a simulation are only as good as the underlying model(s), engineers, operators, and analysts must pay particular attention to their construction. To ensure that the results of the simulation are applicable to the real world, the user must understand the assumptions, conceptualizations, and constraints of its implementation. Additionally, models may be updated and improved using results of actual experiments. M&S is a discipline in its own right; its many application domains often lead to the assumption that M&S is pure application, which is not the case and needs to be recognized by engineering management.
The use of such mathematical models and simulations avoids actual experimentation, which can be costly and time-consuming. Instead, mathematical knowledge and computational power are used to solve real-world problems cheaply and in a time-efficient manner. As such, M&S can facilitate understanding a system's behavior without actually testing the system in the real world. For example, to determine which type of spoiler would improve traction the most while designing a race car, a computer simulation of the car could be used to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. Useful insights about different design decisions could be gleaned without actually building the car. In addition, simulation can support experimentation that occurs totally in software, or in human-in-the-loop environments where simulation represents systems or generates data needed to meet experiment objectives. Furthermore, simulation can be used to train persons using a virtual environment that would otherwise be difficult or expensive to produce.
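As a toy illustration of this kind of design exploration, the sketch below sweeps a spoiler angle through a deliberately invented grip model. The formula and numbers are placeholders chosen only to show the pattern of evaluating a model over a range of design alternatives; they are not real aerodynamics.

```python
# Toy design sweep: evaluate an invented cornering-grip model over a range of
# spoiler angles. The formula is a placeholder, not real aerodynamics.
import math

def cornering_grip(spoiler_angle_deg: float, speed_ms: float = 40.0) -> float:
    angle = math.radians(spoiler_angle_deg)
    downforce = 0.8 * math.sin(angle) * speed_ms ** 2 / 100.0   # invented
    drag_penalty = 0.05 * (spoiler_angle_deg / 10.0) ** 2       # invented
    return 1.0 + 0.01 * downforce - 0.01 * drag_penalty

angles = range(0, 46, 5)
best = max(angles, key=cornering_grip)
print(f"best spoiler angle in this sweep: {best} degrees")
```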
Interest in simulations
Technically, simulation is well accepted. The 2006 National Science Foundation (NSF) report on "Simulation-based Engineering Science" showed the potential of using simulation technology and methods to revolutionize engineering science. Among the reasons for the steadily increasing interest in simulation applications are the following:
Using simulations is generally cheaper, safer and sometimes more ethical than conducting real-world experiments. For example, supercomputers are sometimes used to simulate the detonation of nuclear devices and their effects in order to support better preparedness in the event of a nuclear explosion. Similar efforts are conducted to simulate hurricanes and other natural catastrophes.
Simulations can often be even more realistic than traditional experiments, as they allow the free configuration of the realistic range of environment parameters found in the operational application field of the final product. Examples are supporting deep-water operations of the US Navy or simulating the surfaces of neighboring planets in preparation for NASA missions.
Simulations can often be conducted faster than real time. This allows them to be used for efficient if-then-else analyses of different alternatives, in particular when the data needed to initialize the simulation can easily be obtained from operational data. This use of simulation adds decision support simulation systems to the tool box of traditional decision support systems.
Simulations allow a coherent synthetic environment to be set up that supports the integration of simulated systems in the early analysis phase, then mixed virtual systems with the first prototypical components, and finally a virtual test environment for the final system. If managed correctly, the environment can be migrated from the development and test domain to the training and education domain in follow-on life-cycle phases for the systems (including the option to train and optimize a virtual twin of the real system under realistic constraints even before the first components are built).
The military and defense domain, in particular within the United States, has been the main M&S champion, in the form of funding as well as application of M&S. For example, M&S in modern military organizations is part of the acquisition/procurement strategy; specifically, M&S is used to conduct events and experiments that influence requirements and training for military systems. As such, M&S is considered an integral part of the systems engineering of military systems. Other application domains, however, are currently catching up: M&S in the fields of medicine, transportation, and other industries is poised to rapidly outstrip DoD's use of M&S in the years ahead, if it has not already done so.
Simulation in science
Modeling and simulation are important in research. Representing real systems, either through physical reproductions at smaller scale or through mathematical models whose dynamics can be simulated, allows system behavior to be explored in an articulated way that is often not possible, or too risky, in the real world.
As an emerging discipline
"The emerging discipline of M&S is based on developments in diverse computer science areas as well as influenced by developments in Systems Theory, Systems Engineering, Software Engineering, Artificial Intelligence, and more. This foundation is as diverse as that of engineering management and brings elements of art, engineering, and science together in a complex and unique way that requires domain experts to enable appropriate decisions when it comes to application or development of M&S technology in the context of this paper. The diversity and application-oriented nature of this new discipline sometimes result in the challenge, that the supported application domains themselves already have vocabularies in place that are not necessarily aligned between disjunctive domains. A comprehensive and concise representation of concepts, terms, and activities is needed that make up a professional Body of Knowledge for the M&S discipline. Due to the broad variety of contributors, this process is still ongoing."
Padilla et al., in "Do we Need M&S Science", recommend distinguishing between M&S Science, Engineering, and Applications.
M&S Science contributes to the Theory of M&S, defining the academic foundations of the discipline.
M&S Engineering is rooted in Theory but looks for applicable solution patterns. The focus is on general methods that can be applied in various problem domains.
M&S Applications solve real world problems by focusing on solutions using M&S. Often, the solution results from applying a method, but many solutions are very problem domain specific and are derived from problem domain expertise and not from any general M&S theory or method.
Models can be composed of different units (models at finer granularity) linked to achieve a specific goal; for this reason they can also be called modeling solutions.
More generally, modeling and simulation is a key enabler for systems engineering activities, as the system representation in a computer-readable (and possibly executable) model enables engineers to reproduce the system (or system of systems) behavior. A collection of modeling and simulation methods to support systems engineering activities is provided in the literature.
Application domains
There are many categorizations possible, but the following taxonomy has been very successfully used in the defense domain, and is currently applied to medical simulation and transportation simulation as well.
Analyses Support is conducted in support of planning and experimentation. Very often, the search for an optimal solution that shall be implemented is driving these efforts. What-if analyses of alternatives fall into this category as well. This style of work is often accomplished by simulysts – practitioners with skills in both simulation and analysis. This blending of simulation and analysis is well noted in Kleijnen.
Systems Engineering Support is applied for the procurement, development, and testing of systems. This support can start in early phases and include topics like executable system architectures, and it can support testing by providing a virtual environment in which tests are conducted. This style of work is often accomplished by engineers and architects.
Training and Education Support provides simulators, virtual training environments, and serious games to train and educate people. This style of work is often accomplished by trainers working in concert with computer scientists.
A special use of Analyses Support is applied to ongoing business operations. Traditionally, decision support systems provide this functionality. Simulation systems improve on this functionality by adding a dynamic element, allowing estimates and predictions to be computed, including optimization and what-if analyses.
Individual concepts
Although the terms "modeling" and "simulation" are often used as synonyms within disciplines applying M&S exclusively as a tool, within the discipline of M&S both are treated as individual and equally important concepts. Modeling is understood as the purposeful abstraction of reality, resulting in the formal specification of a conceptualization and underlying assumptions and constraints. M&S is in particular interested in models that are used to support the implementation of an executable version on a computer. The execution of a model over time is understood as the simulation. While modeling targets the conceptualization, simulation challenges mainly focus on implementation; in other words, modeling resides on the abstraction level, whereas simulation resides on the implementation level.
Conceptualization and implementation – modeling and simulation – are two activities that are mutually dependent, but can nonetheless be conducted by separate individuals. Management and engineering knowledge and guidelines are needed to ensure that they are well connected. Just as an engineering management professional in systems engineering must make sure that the systems design captured in a systems architecture is aligned with the systems development, this task needs to be conducted with the same level of professionalism for the model that has to be implemented. As the role of big data and analytics continues to grow, the combination of simulation and analysis is the realm of yet another professional, the simulyst, who blends algorithmic and analytic techniques through visualizations available directly to decision makers. A study designed for the Bureau of Labor Statistics by Lee et al. provides an interesting look at how bootstrap techniques (statistical analysis) were used with simulation to generate population data where none existed.
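The cited study is not reproduced here, but the general idea of bootstrap resampling it refers to can be sketched as follows; the observed values are made up, and the snippet only illustrates estimating the variability of a statistic by resampling with replacement.

```python
# Generic bootstrap sketch: estimate the variability of a sample statistic by
# resampling the observed data with replacement. The data are made up.
import random
import statistics

observed = [42.0, 39.5, 44.1, 40.2, 43.3, 41.8, 38.9, 45.0]

def bootstrap_means(sample, n_resamples=1000, seed=0):
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        resample = [rng.choice(sample) for _ in sample]
        means.append(statistics.mean(resample))
    return means

means = bootstrap_means(observed)
print(f"bootstrap mean = {statistics.mean(means):.2f}, "
      f"standard error = {statistics.stdev(means):.2f}")
```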
Academic programs
Modeling and Simulation has only recently become an academic discipline of its own. Formerly, those working in the field usually had a background in engineering.
The following institutions offer degrees in Modeling and Simulation:
Ph.D. Programs
University of Pennsylvania (Philadelphia, PA)
Old Dominion University (Norfolk, VA)
University of Alabama in Huntsville (Huntsville, AL)
University of Central Florida (Orlando, FL)
Naval Postgraduate School (Monterey, CA)
University of Genoa (Genoa, Italy)
Masters Programs
National University of Science and Technology, Pakistan (Islamabad, Pakistan)
Arizona State University (Tempe, AZ)
Old Dominion University (Norfolk, VA)
University of Central Florida (Orlando, FL)
the University of Alabama in Huntsville (Huntsville, AL)
Middle East Technical University (Ankara, Turkey)
University of New South Wales (Australia)
Naval Postgraduate School (Monterey, CA)
Department of Scientific Computing, Modeling and Simulation (M.Tech (Modelling & Simulation)) (Savitribai Phule Pune University, India)
Columbus State University (Columbus, GA)
Purdue University Calumet (Hammond, IN)
Delft University of Technology (Delft, The Netherlands)
University of Genoa (Genoa, Italy)
Hamburg University of Applied Sciences (Hamburg, Germany)
Professional Science Masters Programs
University of Central Florida (Orlando, FL)
Graduate Certificate Programs
Portland State University Systems Science
Columbus State University (Columbus, GA)
the University of Alabama in Huntsville (Huntsville, AL)
Undergraduate Programs
Old Dominion University (Norfolk, VA)
Ghulam Ishaq Khan Institute of Engineering Sciences and Technology (Swabi, Pakistan)
Modeling and Simulation Body of Knowledge
The Modeling and Simulation Body of Knowledge (M&S BoK) is the domain of knowledge (information) and capability (competency) that identifies the modeling and simulation community of practice and the M&S profession, industry, and market.
The M&S BoK Index is a set of pointers providing handles so that subject information content can be denoted, identified, accessed, and manipulated.
Summary
Three activities have to be conducted and orchestrated to ensure success:
a model must be produced that captures formally the conceptualization,
a simulation must implement this model, and
management must ensure that model and simulation are interconnected and kept current (which normally means that the model needs to be updated whenever the simulation is changed).
See also
Computational science
Computational engineering
Defense Technical Information Center
Glossary of military modeling and simulation
Interservice/Industry Training, Simulation and Education Conference (I/ITSEC)
Microscale and macroscale models
Military Operations Research Society (MORS)
Military simulation
Modeling and Simulation Coordination Office
Operations research
Orbit modeling
Power system simulation
Rule-based modeling
Simulation Interoperability Standards Organization (SISO)
Society for Modeling and Simulation International (SCS)
References
Further reading
The Springer Publishing House publishes the Simulation Foundations, Methods, and Applications series.
Recently, Wiley started its own series on Modeling and Simulation.
External links
US Department of Defense (DoD) Modeling and Simulation Coordination Office (M&SCO)
MODSIM World Conference
Society for Modeling and Simulation
Association for Computing Machinery (ACM) Special Interest Group (SIG) on Simulation and Modeling (SIM)
US Congressional Modeling and Simulation Caucus
Example of an M&S BoK Index developed by Tuncer Ören
SimSummit collaborative environment supporting an M&S BoK
Military terminology
IUPAC nomenclature of organic chemistry
In chemical nomenclature, the IUPAC nomenclature of organic chemistry is a method of naming organic chemical compounds as recommended by the International Union of Pure and Applied Chemistry (IUPAC). It is published in the Nomenclature of Organic Chemistry (informally called the Blue Book). Ideally, every possible organic compound should have a name from which an unambiguous structural formula can be created. There is also an IUPAC nomenclature of inorganic chemistry.
To avoid long and tedious names in normal communication, the official IUPAC naming recommendations are not always followed in practice, except when it is necessary to give an unambiguous and absolute definition to a compound. IUPAC names can sometimes be simpler than older names, as with ethanol instead of ethyl alcohol. For relatively simple molecules they can be more easily understood than non-systematic names, which must be learnt or looked up. However, the common or trivial name is often substantially shorter and clearer, and so preferred. These non-systematic names are often derived from an original source of the compound. Also, very long names may be less clear than structural formulas.
Basic principles
In chemistry, a number of prefixes, suffixes and infixes are used to describe the type and position of the functional groups in the compound.
The steps for naming an organic compound are:
Identification of the most senior group. If more than one functional group is present, the one with the highest group precedence should be used.
Identification of the ring or chain with the maximum number of senior groups.
Identification of the ring or chain with the most senior elements (In order: N, P, Si, B, O, S, C).
Identification of the parent compound. Rings are senior to chains if composed of the same elements.
For cyclic systems: Identification of the parent cyclic ring. The cyclic system must obey these rules, in order of precedence:
It should have the most senior heteroatom (in order: N, O, S, P, Si, B).
It should have the maximum number of rings.
It should have the maximum number of atoms.
It should have the maximum number of heteroatoms.
It should have the maximum number of senior heteroatoms (in order: O, S, N, P, Si, B).
For chains: Identification of the parent hydrocarbon chain. This chain must obey the following rules, in order of precedence:
It should have the maximum length.
It should have the maximum number of heteroatoms.
It should have the maximum number of senior heteroatoms (in order: O, S, N, P, Si, B).
For cyclic systems and chains after previous rules:
It should have the maximum number of multiple bonds, and then the maximum number of double bonds.
It should have the maximum number of substituents of the suffix functional group. By suffix, it is meant that the parent functional group should have a suffix, unlike halogen substituents. If more than one functional group is present, the one with highest group precedence should be used.
Identification of the side-chains. Side chains are the carbon chains that are not in the parent chain, but are branched off from it.
Identification of the remaining functional groups, if any, and naming them by their ionic prefixes (such as hydroxy for −OH, oxy for =O, oxyalkane for −O−R, etc.). Different side-chains and functional groups will be grouped together in alphabetical order. (The multiplier prefixes di-, tri-, etc. are not taken into consideration for grouping alphabetically. For example, ethyl comes before dihydroxy or dimethyl, as the "e" in "ethyl" precedes the "h" in "dihydroxy" and the "m" in "dimethyl" alphabetically. The "di" is not considered in either case.) When both side chains and secondary functional groups are present, they should be written mixed together in one group rather than in two separate groups.
Identification of double/triple bonds.
Numbering of the chain. This is done by first numbering the chain in both directions (left to right and right to left), and then choosing the numbering which follows these rules, in order of precedence. Not every rule will apply to every compound, rules can be skipped if they do not apply.
Has the lowest-numbered locant (or locants) for heteroatoms. Locants are the numbers on the carbons to which the substituent is directly attached.
Has the lowest-numbered locants for the indicated hydrogen. The indicated hydrogen is for some unsaturated heterocyclic compounds. It refers to the hydrogen atoms not attached to atoms with double bonds in the ring system.
Has the lowest-numbered locants for the suffix functional group.
Has the lowest-numbered locants for multiple bonds ('ene', 'yne'), and hydro prefixes. (The locant of a multiple bond is the number of the adjacent carbon with a lower number).
Has the lowest-numbered locants for all substituents cited by prefixes.
Has the lowest-numbered locants for substituents in order of citation (for example: in a cyclic ring with only bromine and chlorine functional groups, alphabetically bromo- is cited before chloro- and would receive the lower locant).
Numbering of the various substituents and bonds with their locants. If there is more than one of the same type of substituent/double bond, a prefix is added showing how many there are (di – 2, tri – 3, tetra – 4, then as for the number of carbons below with 'a' added at the end)
The numbers for that type of side chain will be grouped in ascending order and written before the name of the side-chain. If there are two side-chains with the same alpha carbon, the number will be written twice. Example: 2,2,3-trimethyl- . If there are both double bonds and triple bonds, "en" (double bond) is written before "yne" (triple bond). When the main functional group is a terminal functional group (a group which can exist only at the end of a chain, like formyl and carboxyl groups), there is no need to number it.
Arrangement in this form: Group of side chains and secondary functional groups with numbers made in step 6 + prefix of parent hydrocarbon chain (eth, meth) + double/triple bonds with numbers (or "ane") + primary functional group suffix with numbers.Wherever it says "with numbers", it is understood that between the word and the numbers, the prefix (di-, tri-) is used.
Adding of punctuation:
Commas are put between numbers (2 5 5 becomes 2,5,5)
Hyphens are put between a number and a letter (2 5 5 trimethylheptane becomes 2,5,5-trimethylheptane)
Successive words are merged into one word (trimethyl heptane becomes trimethylheptane) Note: IUPAC uses one-word names throughout. This is why all parts are connected.
The resulting name appears as:
#,#-di<side chain>-#-<secondary functional group>-#-<side chain>-#,#,#-tri<secondary functional group><parent chain prefix><If all bonds are single bonds, use "ane">-#,#-di<double bonds>-#-<triple bonds>-#-<primary functional group>
where each "#" represents a number. The group secondary functional groups and side chains may not look the same as shown here, as the side chains and secondary functional groups are arranged alphabetically. The di- and tri- have been used just to show their usage. (di- after #,#, tri- after #,#,#, etc.)
Example
Here is a sample molecule with the parent carbons numbered:
For simplicity, here is an image of the same molecule, where the hydrogens in the parent chain are removed and the carbons are shown by their numbers:
Now, following the above steps:
The parent hydrocarbon chain has 23 carbons. It is called tricosa-.
The functional groups with the highest precedence are the two ketone groups.
The groups are on carbon atoms 3 and 9. As there are two, we write 3,9-dione.
The numbering of the molecule is based on the ketone groups. When numbering from left to right, the ketone groups are numbered 3 and 9. When numbering from right to left, the ketone groups are numbered 15 and 21. 3 is less than 15, therefore the ketones are numbered 3 and 9. The set of locants that is lower at the first point of difference is always used, not the set with the lower sum.
The side chains are: an ethyl- at carbon 4, an ethyl- at carbon 8, and a butyl- at carbon 12. Note: the −OCH3 at carbon atom 15 is not a side chain; it is a methoxy functional group.
There are two ethyl- groups. They are combined to create, 4,8-diethyl.
The side chains are grouped like this: 12-butyl-4,8-diethyl. (But this is not necessarily the final grouping, as functional groups may be added in between to ensure all groups are listed alphabetically.)
The secondary functional groups are: a hydroxy- at carbon 5, a chloro- at carbon 11, a methoxy- at carbon 15, and a bromo- at carbon 18. Grouped with the side chains, this gives 18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxy.
There are two double bonds: one between carbons 6 and 7, and one between carbons 13 and 14. They would be called "6,13-diene", but the presence of alkynes switches it to 6,13-dien. There is one triple bond between carbon atoms 19 and 20. It will be called 19-yne.
The arrangement (with punctuation) is: 18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricosa-6,13-dien-19-yne-3,9-dione
Finally, due to cis-trans isomerism, we have to specify the relative orientation of functional groups around each double bond. For this example, both double bonds are trans isomers, so we have (6E,13E).
The final name is (6E,13E)-18-bromo-12-butyl-11-chloro-4,8-diethyl-5-hydroxy-15-methoxytricosa-6,13-dien-19-yne-3,9-dione.
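The choice of numbering direction in this example can be expressed as a small comparison of locant sets. The sketch below assumes the positions of the senior (suffix) groups are already known and simply applies the "lowest locants at the first point of difference" rule.

```python
# Compare the locant sets from the two numbering directions and keep the set
# that is lower at the first point of difference. Positions of the senior
# (suffix) groups are assumed to be known already.
def choose_numbering(chain_length, positions_left_to_right):
    left = sorted(positions_left_to_right)
    right = sorted(chain_length + 1 - p for p in positions_left_to_right)
    # tuple comparison implements "lowest locants at the first point of difference"
    if tuple(left) <= tuple(right):
        return "left-to-right", left
    return "right-to-left", right

print(choose_numbering(23, [3, 9]))
# ('left-to-right', [3, 9])  -- 3 beats 15 at the first point of difference
```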
Hydrocarbons
Alkanes
Straight-chain alkanes take the suffix "-ane" and are prefixed depending on the number of carbon atoms in the chain, following standard rules. The first few are: methane, ethane, propane, butane, pentane, hexane, heptane, octane, nonane, decane, undecane and dodecane.
For example, the simplest alkane is methane, and the nine-carbon alkane is named nonane. The names of the first four alkanes were derived from methanol, ether, propionic acid and butyric acid, respectively. The rest are named with a Greek numeric prefix, with the exceptions of nonane which has a Latin prefix, and undecane which has mixed-language prefixes.
Cyclic alkanes are simply prefixed with "cyclo-": for example, C4H8 is cyclobutane (not to be confused with butene) and C6H12 is cyclohexane (not to be confused with hexene).
Branched alkanes are named as a straight-chain alkane with attached alkyl groups. They are prefixed with a number indicating the carbon the group is attached to, counting from the end of the alkane chain. For example, (CH3)3CH, commonly known as isobutane, is treated as a propane chain with a methyl group bonded to the middle (2) carbon, and given the systematic name 2-methylpropane. However, although the name 2-methylpropane could be used, it is easier and more logical to call it simply methylpropane – the methyl group could not possibly occur on any of the other carbon atoms (that would lengthen the chain and result in butane, not propane) and therefore the use of the number "2" is unnecessary.
If there is ambiguity in the position of the substituent, depending on which end of the alkane chain is counted as "1", then numbering is chosen so that the smaller number is used. For example, (CH3)2CHCH2CH3 (isopentane) is named 2-methylbutane, not 3-methylbutane.
If there are multiple side-branches of the same size alkyl group, their positions are separated by commas and the group prefixed with multiplier prefixes depending on the number of branches. For example, C(CH3)4 (neopentane) is named 2,2-dimethylpropane. If there are different groups, they are added in alphabetical order, separated by commas or hyphens. The longest possible main alkane chain is used; therefore 3-ethyl-4-methylhexane instead of 2,3-diethylpentane, even though these describe equivalent structures. The di-, tri- etc. prefixes are ignored for the purpose of alphabetical ordering of side chains (e.g. 3-ethyl-2,4-dimethylpentane, not 2,4-dimethyl-3-ethylpentane).
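A trivial lookup is enough to generate the names of unbranched alkanes from the carbon count; the following sketch hard-codes only the first twelve standard prefixes.

```python
# Lookup-based naming of unbranched alkanes; only the first twelve standard
# prefixes are hard-coded here.
ALKANE_ROOTS = {
    1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent", 6: "hex",
    7: "hept", 8: "oct", 9: "non", 10: "dec", 11: "undec", 12: "dodec",
}

def straight_chain_alkane(n_carbons: int) -> str:
    return ALKANE_ROOTS[n_carbons] + "ane"

print(straight_chain_alkane(9))             # nonane
print("cyclo" + straight_chain_alkane(6))   # cyclohexane
```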
Alkenes
Alkenes are named for their parent alkane chain with the suffix "-ene" and a numerical root indicating the position of the carbon with the lower number for each double bond in the chain: CH2=CHCH2CH3 is but-1-ene.
Multiple double bonds take the form -diene, -triene, etc., with the size prefix of the chain taking an extra "a": CH2=CHCH=CH2 is buta-1,3-diene. Simple cis and trans isomers may be indicated with a prefixed cis- or trans-: cis-but-2-ene, trans-but-2-ene. However, cis- and trans- are relative descriptors. It is IUPAC convention to describe all alkenes using absolute descriptors of Z- (same side) and E- (opposite side) with the Cahn–Ingold–Prelog priority rules (see also E–Z notation).
Alkynes
Alkynes are named using the same system, with the suffix "-yne" indicating a triple bond: ethyne (acetylene), propyne (methylacetylene).
Functional groups
Haloalkanes and haloarenes
In haloalkanes and haloarenes, halogen functional groups are prefixed with the bonding position and take the form of fluoro-, chloro-, bromo-, iodo-, etc., depending on the halogen. Multiple groups are dichloro-, trichloro-, etc., and dissimilar groups are ordered alphabetically as before. For example, CHCl3 (chloroform) is trichloromethane. The anesthetic halothane is 2-bromo-2-chloro-1,1,1-trifluoroethane.
Alcohols
Alcohols take the suffix "-ol" with a numerical suffix indicating the bonding position: CH3CH2CH2OH is propan-1-ol. The suffixes -diol, -triol, -tetraol, etc., are used for multiple −OH groups: ethylene glycol is ethane-1,2-diol.
If higher precedence functional groups are present (see order of precedence, below), the prefix "hydroxy" is used with the bonding position: CH3CH(OH)COOH is 2-hydroxypropanoic acid.
Ethers
Ethers consist of an oxygen atom between the two attached carbon chains. The shorter of the two chains becomes the first part of the name with the -ane suffix changed to -oxy, and the longer alkane chain becomes the suffix of the name of the ether. Thus, CH3OCH3 is methoxymethane, and CH3OCH2CH3 is methoxyethane (not ethoxymethane). If the oxygen is not attached to the end of the main alkane chain, then the whole shorter alkyl-plus-ether group is treated as a side-chain and prefixed with its bonding position on the main chain. Thus CH3CH(OCH3)CH3 is 2-methoxypropane.
Alternatively, an ether chain can be named as an alkane in which one carbon is replaced by an oxygen, a replacement denoted by the prefix "oxa". For example, CH3OCH2CH3 could also be called 2-oxabutane, and an epoxide could be called oxacyclopropane. This method is especially useful when both groups attached to the oxygen atom are complex.
Aldehydes
Aldehydes take the suffix "-al". If other functional groups are present, the chain is numbered such that the aldehyde carbon is in the "1" position, unless functional groups of higher precedence are present.
If a prefix form is required, "oxo-" is used (as for ketones), with the position number indicating the end of a chain: OHCCH2COOH is 3-oxopropanoic acid. If the carbon in the carbonyl group cannot be included in the attached chain (for instance in the case of cyclic aldehydes), the prefix "formyl-" or the suffix "-carbaldehyde" is used: C6H11CHO is cyclohexanecarbaldehyde. If an aldehyde is attached to a benzene ring and is the main functional group, the suffix becomes benzaldehyde.
Ketones
In general, ketones take the suffix "-one" (pronounced own, not won) with a suffixed position number: CH3COCH2CH2CH3 is pentan-2-one. If a higher-precedence suffix is in use, the prefix "oxo-" is used: CH3CH2CH2COCH2CHO is 3-oxohexanal.
Carboxylic acids
In general, carboxylic acids are named with the suffix -oic acid (etymologically a back-formation from benzoic acid). As with aldehydes, the carboxyl functional group must take the "1" position on the main chain, and so the locant need not be stated. For example, CH3CH(OH)COOH (lactic acid) is named 2-hydroxypropanoic acid with no "1" stated. Some traditional names for common carboxylic acids (such as acetic acid) are in such widespread use that they are retained in IUPAC nomenclature, though systematic names like ethanoic acid are also used. Carboxylic acids attached to a benzene ring are structural analogs of benzoic acid and are named as one of its derivatives.
If there are multiple carboxyl groups on the same parent chain, multiplying prefixes are used: malonic acid, CH2(COOH)2, is systematically named propanedioic acid. Alternatively, the suffix "-carboxylic acid" can be used in place of "-oic acid", combined with a multiplying prefix if necessary – mellitic acid is benzenehexacarboxylic acid, for example. In the latter case, the carbon atoms in the carboxyl groups do not count as being part of the main chain, a rule that also applies to the prefix form "carboxy-". Citric acid serves as an example: it is formally named 2-hydroxypropane-1,2,3-tricarboxylic acid rather than 3-carboxy-3-hydroxypentanedioic acid.
Carboxylates
Salts of carboxylic acids are named following the usual cation-then-anion conventions used for ionic compounds in both IUPAC and common nomenclature systems. The name of the carboxylate anion is derived from that of the parent acid by replacing the "-oic acid" ending with "-oate" or "-carboxylate". For example, C6H5COONa, the sodium salt of benzoic acid, is called sodium benzoate. Where an acid has both a systematic and a common name (like CH3COOH, for example, which is known as both acetic acid and as ethanoic acid), its salts can be named from either parent name. Thus, CH3COOK can be named as potassium acetate or as potassium ethanoate. The prefix form is "carboxylato-".
Esters
Esters are named as alkyl derivatives of carboxylic acids. The alkyl (R') group is named first. The R−CO−O− part is then named as a separate word based on the carboxylic acid name, with the ending changed from "-oic acid" to "-oate" or "-carboxylate". For example, CH3CH2CH2CH2COOCH3 is methyl pentanoate, and (CH3)2CHCH2CH2COOCH2CH3 is ethyl 4-methylpentanoate. For esters such as ethyl acetate, ethyl formate or dimethyl phthalate that are based on common acids, IUPAC recommends use of these established names, called retained names. The "-oate" changes to "-ate". Some simple examples, named both ways, are shown in the figure above.
If the alkyl group is not attached at the end of the chain, the bond position to the ester group is suffixed before "-yl": CH3CH2COOCH(CH3)CH2CH3 may be called butan-2-yl propanoate or butan-2-yl propionate. The prefix form is "oxycarbonyl-", with the (R') group preceding.
Acyl groups
Acyl groups are named by stripping the "-ic acid" of the corresponding carboxylic acid and replacing it with "-yl". For example, CH3CO−R is called ethanoyl-R.
Acyl halides
Simply add the name of the attached halide to the end of the acyl group. For example, CH3COCl is ethanoyl chloride. An alternative suffix is "-carbonyl halide" as opposed to "-oyl halide". The prefix form is "halocarbonyl-".
Acid anhydrides
Acid anhydrides have two acyl groups linked by an oxygen atom. If both acyl groups are the same, then the name of the carboxylic acid with the word acid is replaced with the word anhydride, and the IUPAC name consists of two words. If the acyl groups are different, then they are named in alphabetical order in the same way, with anhydride replacing acid, and the IUPAC name consists of three words. For example, (CH3CO)2O is called ethanoic anhydride and CH3CO−O−COCH2CH3 is called ethanoic propanoic anhydride.
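The name transformations described above for acid derivatives are largely mechanical string substitutions, as the illustrative sketch below suggests; it only handles systematic "-ic acid" names and ignores retained names and "-carboxylic acid" forms.

```python
# Mechanical name transformations for simple "-ic acid" names; retained names
# and "-carboxylic acid" forms are not handled.
def to_carboxylate(acid: str) -> str:
    return acid.replace("ic acid", "ate")

def to_acyl_halide(acid: str, halide: str = "chloride") -> str:
    return acid.replace("ic acid", "yl") + " " + halide

def to_anhydride(acid: str) -> str:
    return acid.replace(" acid", " anhydride")

print(to_carboxylate("ethanoic acid"))   # ethanoate
print(to_acyl_halide("ethanoic acid"))   # ethanoyl chloride
print(to_anhydride("ethanoic acid"))     # ethanoic anhydride
```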
Amines
Amines are named for the attached alkane chain with the suffix "-amine" (e.g., methanamine). If necessary, the bonding position is suffixed: propan-1-amine, propan-2-amine. The prefix form is "amino-".
For secondary amines (of the form R−NH−R), the longest carbon chain attached to the nitrogen atom becomes the primary name of the amine; the other chain is prefixed as an alkyl group with its location prefix given as an italic N: CH3NHCH2CH3 is N-methylethanamine. Tertiary amines are treated similarly: CH3CH2N(CH3)CH2CH2CH3 is N-ethyl-N-methylpropanamine. Again, the substituent groups are ordered alphabetically.
Amides
Amides take the suffix "-amide", or "-carboxamide" if the carbon in the amide group cannot be included in the main chain. The prefix form is "carbamoyl-". e.g., methanamide, ethanamide.
Amides that have additional substituents on the nitrogen are treated similarly to the case of amines: they are ordered alphabetically with the location prefix N: HCON(CH3)2 is N,N-dimethylmethanamide, and CH3CON(CH3)2 is N,N-dimethylethanamide.
Nitriles
Nitriles are named by adding the suffix "-nitrile" to the longest hydrocarbon chain (including the carbon of the cyano group). They can also be named by replacing the "-oic acid" of the corresponding carboxylic acid with "-carbonitrile". The prefix form is "cyano-". Functional class IUPAC nomenclature may also be used, in the form of alkyl cyanides. For example, CH3CH2CH2CH2CN is called pentanenitrile or butyl cyanide.
Cyclic compounds
Cycloalkanes and aromatic compounds can be treated as the main parent chain of the compound, in which case the positions of substituents are numbered around the ring structure. For example, the three isomers of xylene, C6H4(CH3)2, commonly the ortho-, meta-, and para- forms, are 1,2-dimethylbenzene, 1,3-dimethylbenzene, and 1,4-dimethylbenzene. The cyclic structures can also be treated as functional groups themselves, in which case they take the prefix "cycloalkyl-" (e.g. "cyclohexyl-") or, for benzene, "phenyl-".
The IUPAC nomenclature scheme becomes rapidly more elaborate for more complex cyclic structures, with notation for compounds containing conjoined rings, and many common names such as phenol being accepted as base names for compounds derived from them.
Order of precedence of group
When compounds contain more than one functional group, the order of precedence determines which groups are named with prefix or suffix forms. The table below shows common groups in decreasing order of precedence. The highest-precedence group takes the suffix, with all others taking the prefix form. However, double and triple bonds only take suffix form (-en and -yn) and are used with other suffixes.
Prefixed substituents are ordered alphabetically (excluding any modifiers such as di-, tri-, etc.), e.g. chlorofluoromethane, not fluorochloromethane. If there are multiple functional groups of the same type, either prefixed or suffixed, the position numbers are ordered numerically (thus ethane-1,2-diol, not ethane-2,1-diol). The N position indicator for amines and amides comes before "1", e.g., (CH3)2CHCH2NHCH3 is N,2-dimethylpropanamine.
*Note: These suffixes, in which the carbon atom is counted as part of the preceding chain, are the most commonly used. See individual functional group articles for more details.
The order of remaining functional groups is only needed for substituted benzene and hence is not mentioned here.
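As a rough illustration of how the precedence rule drives suffix selection, the sketch below uses a partial, illustrative excerpt of the precedence order (it is not the complete IUPAC table) to split the groups present into one suffix group and alphabetically ordered prefix groups.

```python
# Split the functional groups present into one suffix group (highest precedence)
# and alphabetically ordered prefix groups. The precedence list is a partial,
# illustrative excerpt, not the full IUPAC table.
PRECEDENCE = ["carboxylic acid", "ester", "amide", "nitrile",
              "aldehyde", "ketone", "alcohol", "amine"]

def split_suffix_and_prefixes(groups):
    ranked = sorted(set(groups), key=PRECEDENCE.index)
    return ranked[0], sorted(ranked[1:])

suffix, prefixes = split_suffix_and_prefixes(["alcohol", "ketone"])
print(suffix, prefixes)   # ketone ['alcohol'] -> "-one" suffix, "hydroxy-" prefix
```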
Common nomenclature – trivial names
Common nomenclature uses the older names for some organic compounds instead of using the prefixes for the carbon skeleton above. The pattern can be seen below.
Ketones
Common names for ketones can be derived by naming the two alkyl or aryl groups bonded to the carbonyl group as separate words followed by the word ketone.
Acetone
Acetophenone
Benzophenone
Ethyl isopropyl ketone
Diethyl ketone
The first three of the names shown above are still considered to be acceptable IUPAC names.
Aldehydes
The common name for an aldehyde is derived from the common name of the corresponding carboxylic acid by dropping the word acid and changing the suffix from -ic or -oic to -aldehyde.
Formaldehyde
Acetaldehyde
Ions
The IUPAC nomenclature also provides rules for naming ions.
Hydron
Hydron is a generic term for hydrogen cation; protons, deuterons and tritons are all hydrons.
The hydrons are not found in heavier isotopes, however.
Parent hydride cations
Simple cations formed by adding a hydron to a hydride of a halogen, chalcogen or pnictogen are named by adding the suffix "-onium" to the element's root: NH4+ is ammonium, H3O+ is oxonium, and H2F+ is fluoronium. Ammonium was adopted instead of nitronium, which commonly refers to NO2+.
If the cationic center of the hydride is not a halogen, chalcogen or pnictogen, then the suffix "-ium" is added to the name of the neutral hydride after dropping any final 'e'. CH5+ is methanium, HO−OH2+ is dioxidanium (HO−OH is dioxidane), and H2N−NH3+ is diazanium (H2N−NH2 is diazane).
Cations and substitution
The above cations except for methanium are not, strictly speaking, organic, since they do not contain carbon. However, many organic cations are obtained by substituting another element or some functional group for a hydrogen.
The name of each substitution is prefixed to the hydride cation name. If many substitutions by the same functional group occur, then the number is indicated by prefixing with "di-", "tri-", as with halogenation. (CH3)3O+ is trimethyloxonium. CF3NH3+ is trifluoromethylammonium.
See also
Descriptor (chemistry)
Hantzsch–Widman nomenclature
International Union of Biochemistry and Molecular Biology
Nucleic acid notation
Organic nomenclature in Chinese
Phanes
Preferred IUPAC name
Von Baeyer nomenclature
IUPAC nomenclature of inorganic chemistry
References
Bibliography
External links
IUPAC Nomenclature of Organic Chemistry (online version of several older editions of the IUPAC Blue Book)
IUPAC Recommendations on Organic & Biochemical Nomenclature, Symbols, Terminology, etc. (includes IUBMB Recommendations for biochemistry)
Bibliography of IUPAC Recommendations on Organic Nomenclature (last updated 11 April 2003)
ACD/Name Software for generating systematic nomenclature
ChemAxon Name <> Structure – ChemAxon IUPAC (& traditional) name to structure and structure to IUPAC name software. As used at chemicalize.org
chemicalize.org A free web site/service that extracts IUPAC names from web pages and annotates a 'chemicalized' version with structure images. Structures from annotated pages can also be searched.
American Chemical Society, Committee on Nomenclature, Terminology & Symbols
Chemical nomenclature
Encodings
Organic chemistry
Conceptual model
The term conceptual model refers to any model that is formed after a conceptualization or generalization process. Conceptual models are often abstractions of things in the real world, whether physical or social. Semantic studies are relevant to various stages of concept formation. Semantics is fundamentally a study of concepts, the meaning that thinking beings give to various elements of their experience.
Overview
Concept models and conceptual models
The value of a conceptual model is usually directly proportional to how well it corresponds to a past, present, future, actual or potential state of affairs. A concept model (a model of a concept) is quite different, because in order to be a good model it need not have this real-world correspondence. In artificial intelligence, conceptual models and conceptual graphs are used for building expert systems and knowledge-based systems; here the analysts are concerned with representing expert opinion on what is true, not their own ideas on what is true.
Type and scope of conceptual models
Conceptual models range in type from the more concrete, such as the mental image of a familiar physical object, to the formal generality and abstractness of mathematical models which do not appear to the mind as an image. Conceptual models also range in terms of the scope of the subject matter that they are taken to represent. A model may, for instance, represent a single thing (e.g. the Statue of Liberty), whole classes of things (e.g. the electron), and even very vast domains of subject matter such as the physical universe. The variety and scope of conceptual models is due to the variety of purposes for which people use them.
Conceptual modeling is the activity of formally describing some aspects of the physical and social world around us for the purposes of understanding and communication.
Fundamental objectives
A conceptual model's primary objective is to convey the fundamental principles and basic functionality of the system which it represents. Also, a conceptual model must be developed in such a way as to provide an easily understood system interpretation for the model's users. A conceptual model, when implemented properly, should satisfy four fundamental objectives.
Enhance an individual's understanding of the representative system
Facilitate efficient conveyance of system details between stakeholders
Provide a point of reference for system designers to extract system specifications
Document the system for future reference and provide a means for collaboration
The conceptual model plays an important role in the overall system development life cycle. Figure 1 below depicts the role of the conceptual model in a typical system development scheme. It is clear that if the conceptual model is not fully developed, the execution of fundamental system properties may not be implemented properly, giving way to future problems or system shortfalls. These failures do occur in the industry and have been linked to a lack of user input, incomplete or unclear requirements, and changing requirements. Those weak links in the system design and development process can be traced to improper execution of the fundamental objectives of conceptual modeling. The importance of conceptual modeling is evident when such systemic failures are mitigated by thorough system development and adherence to proven development objectives/techniques.
Modelling techniques
Numerous techniques can be applied across multiple disciplines to increase the user's understanding of the system to be modeled. A few techniques are briefly described in the following text, however, many more exist or are being developed. Some commonly used conceptual modeling techniques and methods include: workflow modeling, workforce modeling, rapid application development, object-role modeling, and the Unified Modeling Language (UML).
Data flow modeling
Data flow modeling (DFM) is a basic conceptual modeling technique that graphically represents elements of a system. DFM is a fairly simple technique; however, like many conceptual modeling techniques, it is possible to construct higher and lower level representative diagrams. The data flow diagram usually does not convey complex system details such as parallel development considerations or timing information, but rather works to bring the major system functions into context. Data flow modeling is a central technique used in systems development that utilizes the structured systems analysis and design method (SSADM).
Entity relationship modeling
Entity–relationship modeling (ERM) is a conceptual modeling technique used primarily for software system representation. Entity-relationship diagrams, which are a product of executing the ERM technique, are normally used to represent database models and information systems. The main components of the diagram are the entities and relationships. The entities can represent independent functions, objects, or events. The relationships are responsible for relating the entities to one another. To form a system process, the relationships are combined with the entities and any attributes needed to further describe the process. Multiple diagramming conventions exist for this technique; IDEF1X, Bachman, and EXPRESS, to name a few. These conventions are just different ways of viewing and organizing the data to represent different system aspects.
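A minimal way to represent an entity-relationship fragment in code, purely for illustration, is to model entities with attribute lists and name the relationships between them; the entity and attribute names below are invented.

```python
# Minimal entity-relationship fragment: entities with attributes and a named,
# directed relationship between them. Names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Relationship:
    name: str
    source: Entity
    target: Entity
    cardinality: str = "1:N"

customer = Entity("Customer", ["customer_id", "name"])
order = Entity("Order", ["order_id", "date"])
places = Relationship("places", customer, order, "1:N")
print(f"{places.source.name} --{places.name} ({places.cardinality})--> {places.target.name}")
```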
Event-driven process chain
The event-driven process chain (EPC) is a conceptual modeling technique which is mainly used to systematically improve business process flows. Like most conceptual modeling techniques, the event-driven process chain consists of entities/elements and functions that allow relationships to be developed and processed. More specifically, the EPC is made up of events which define what state a process is in or the rules by which it operates. In order to progress through events, a function/active event must be executed. Depending on the process flow, the function has the ability to transform event states or link to other event-driven process chains. Other elements exist within an EPC, all of which work together to define how and by what rules the system operates. The EPC technique can be applied to business practices such as resource planning, process improvement, and logistics.
Joint application development
The dynamic systems development method uses a specific process called JEFFF to conceptually model a system's life cycle. JEFFF is intended to focus more on the higher-level development planning that precedes a project's initialization. The JAD process calls for a series of workshops in which the participants work to identify, define, and generally map a successful project from conception to completion. This method has been found not to work well for large-scale applications; however, smaller applications usually report some net gain in efficiency.
Place/transition net
Also known as Petri nets, this conceptual modeling technique allows a system to be constructed with elements that can be described by direct mathematical means. The Petri net, because of its nondeterministic execution properties and well-defined mathematical theory, is a useful technique for modeling concurrent system behavior, i.e. simultaneous process executions.
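The firing rule of a basic place/transition net can be sketched in a few lines. The net below is invented, uses unit arc weights only, and ignores the nondeterministic choice among several enabled transitions that a fuller Petri net analysis would have to consider.

```python
# Basic place/transition net: a transition is enabled when every input place
# holds at least one token; firing consumes and produces one token per arc.
marking = {"p1": 1, "p2": 1, "p3": 0}
transitions = {"t1": {"inputs": ["p1", "p2"], "outputs": ["p3"]}}

def enabled(name):
    return all(marking[p] >= 1 for p in transitions[name]["inputs"])

def fire(name):
    if not enabled(name):
        raise ValueError(f"{name} is not enabled")
    for p in transitions[name]["inputs"]:
        marking[p] -= 1
    for p in transitions[name]["outputs"]:
        marking[p] += 1

fire("t1")
print(marking)   # {'p1': 0, 'p2': 0, 'p3': 1}
```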
State transition modeling
State transition modeling makes use of state transition diagrams to describe system behavior. These state transition diagrams use distinct states to define system behavior and changes. Most current modeling tools contain some kind of ability to represent state transition modeling. The use of state transition models can be most easily recognized as logic state diagrams and directed graphs for finite-state machines.
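A state transition model reduces naturally to a transition table; the sketch below is an invented, minimal finite-state machine that leaves the state unchanged for events with no defined transition.

```python
# Minimal finite-state machine driven by a transition table; events with no
# defined transition leave the state unchanged. States and events are invented.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
    print(f"{event} -> {state}")
```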
Technique evaluation and selection
Because the conceptual modeling method can sometimes be purposefully vague to account for a broad area of use, the actual application of concept modeling can become difficult. To alleviate this issue, and to shed some light on what to consider when selecting an appropriate conceptual modeling technique, the framework proposed by Gemino and Wand will be discussed in the following text. However, before evaluating the effectiveness of a conceptual modeling technique for a particular application, an important concept must be understood: comparing conceptual models by way of specifically focusing on their graphical or top-level representations is shortsighted. Gemino and Wand make a good point when arguing that the emphasis should be placed on a conceptual modeling language when choosing an appropriate technique. In general, a conceptual model is developed using some form of conceptual modeling technique. That technique will utilize a conceptual modeling language that determines the rules for how the model is arrived at. Understanding the capabilities of the specific language used is inherent to properly evaluating a conceptual modeling technique, as the language reflects the technique's descriptive ability. Also, the conceptual modeling language will directly influence the depth at which the system is capable of being represented, whether it be complex or simple.
Considering affecting factors
Building on some of their earlier work, Gemino and Wand acknowledge some main points to consider when studying the affecting factors: the content that the conceptual model must represent, the method in which the model will be presented, the characteristics of the model's users, and the conceptual modeling language's specific task. The conceptual model's content should be considered in order to select a technique that would allow relevant information to be presented. The presentation method for selection purposes would focus on the technique's ability to represent the model at the intended level of depth and detail. The characteristics of the model's users or participants are an important aspect to consider. A participant's background and experience should coincide with the conceptual model's complexity; otherwise, misrepresentation of the system or misunderstanding of key system concepts could lead to problems in that system's realization. The conceptual modeling language's task will further allow an appropriate technique to be chosen. The difference between creating a system conceptual model to convey system functionality and creating a system conceptual model to interpret that functionality could involve two completely different types of conceptual modeling languages.
Considering affected variables
Gemino and Wand go on to expand the affected variable content of their proposed framework by considering the focus of observation and the criterion for comparison. The focus of observation considers whether the conceptual modeling technique will create a "new product", or whether the technique will only bring about a more intimate understanding of the system being modeled. The criterion for comparison would weigh the ability of the conceptual modeling technique to be efficient or effective. A conceptual modeling technique that allows for development of a system model which takes all system variables into account at a high level may make the process of understanding the system functionality more efficient, but the technique lacks the necessary information to explain the internal processes, rendering the model less effective.
When deciding which conceptual technique to use, the recommendations of Gemino and Wand can be applied in order to properly evaluate the scope of the conceptual model in question. Understanding the conceptual model's scope will lead to a more informed selection of a technique that properly addresses that particular model. In summary, when deciding between modeling techniques, answering the following questions would allow one to address some important conceptual modeling considerations.
What content will the conceptual model represent?
How will the conceptual model be presented?
Who will be using or participating in the conceptual model?
How will the conceptual model describe the system?
What is the conceptual model's focus of observation?
Will the conceptual model be efficient or effective in describing the system?
Another function of the simulation conceptual model is to provide a rational and factual basis for assessment of simulation application appropriateness.
Models in philosophy and science
Mental model
In cognitive psychology and philosophy of mind, a mental model is a representation of something in the mind, but a mental model may also refer to a nonphysical external model of the mind itself.
Metaphysical models
A metaphysical model is a type of conceptual model which is distinguished from other conceptual models by its proposed scope; a metaphysical model intends to represent reality in the broadest possible way. This is to say that it explains the answers to fundamental questions such as whether matter and mind are one or two substances; or whether or not humans have free will.
Conceptual model vs. semantic model
Conceptual models and semantic models have many similarities; however, they differ in how they are presented, in their level of flexibility, and in how they are used.
Conceptual models have a certain purpose in mind, hence the core semantic concepts are predefined in a so-called meta model. This enables pragmatic modelling but reduces flexibility, as only the predefined semantic concepts can be used. Examples are flow charts for process behaviour or organisation charts for hierarchical (tree) structures.
Semantic models are more flexible and open, and therefore more difficult to model. Potentially any semantic concept can be defined, hence the modelling support is very generic. Examples are terminologies, taxonomies or ontologies.
In a concept model each concept has a unique and distinguishable graphical representation, whereas semantic concepts have the same representation by default.
In a concept model each concept has predefined properties that can be populated, whereas semantic concepts are related to concepts that are interpreted as properties.
In a concept model operational semantics can be built in, like the processing of a sequence, whereas a semantic model needs an explicit semantic definition of the sequence.
Whether a concept model or a semantic model is used therefore depends on the "object under survey", the intended goal, the necessary flexibility, and how the model is interpreted. For human interpretation the focus may be on graphical concept models; for machine interpretation the focus may be on semantic models.
Epistemological models
An epistemological model is a type of conceptual model whose proposed scope is the known and the knowable, and the believed and the believable.
Logical models
In logic, a model is a type of interpretation under which a particular statement is true. Logical models can be broadly divided into ones which only attempt to represent concepts, such as mathematical models; and ones which attempt to represent physical objects, and factual relationships, among which are scientific models.
Model theory is the study of (classes of) mathematical structures such as groups, fields, graphs, or even universes of set theory, using tools from mathematical logic. A system that gives meaning to the sentences of a formal language is called a model for the language. If a model for a language moreover satisfies a particular sentence or theory (set of sentences), it is called a model of the sentence or theory. Model theory has close ties to algebra and universal algebra.
Mathematical models
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures.
A more comprehensive type of mathematical model uses a linguistic version of category theory to model a given situation. Akin to entity-relationship models, custom categories or sketches can be directly translated into database schemas. The difference is that logic is replaced by category theory, which brings powerful theorems to bear on the subject of modeling, especially useful for translating between disparate models (as functors between categories).
Scientific models
A scientific model is a simplified abstract view of a complex reality. A scientific model represents empirical objects, phenomena, and physical processes in a logical way. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true (Leo Apostel (1961), "Formal study of models", in Hans Freudenthal (ed.), The Concept and the Role of the Model in Mathematics and Natural and Social Sciences, Springer, pp. 8–9).
Statistical models
A statistical model is a probability distribution function proposed as generating data. In a parametric model, the probability distribution function has variable parameters, such as the mean and variance in a normal distribution, or the coefficients for the various exponents of the independent variable in linear regression. A nonparametric model has a distribution function without parameters, such as in bootstrapping, and is only loosely confined by assumptions. Model selection is a statistical method for selecting a distribution function within a class of them; e.g., in linear regression where the dependent variable is a polynomial of the independent variable with parametric coefficients, model selection is selecting the highest exponent, and may be done with nonparametric means, such as with cross validation.
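As an illustration of such model selection, the sketch below chooses a polynomial degree by k-fold cross validation; the data, function names, and fold count are illustrative assumptions rather than part of any particular statistics package:

import numpy as np

def cv_error(x, y, degree, k=5, seed=0):
    # Estimate out-of-sample error of a degree-`degree` polynomial fit
    # by k-fold cross validation.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # parametric coefficients
        errors.append(np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2))
    return np.mean(errors)

# Model selection: keep the degree with the lowest cross-validated error.
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.1 * np.random.default_rng(1).normal(size=x.size)
best_degree = min(range(1, 6), key=lambda d: cv_error(x, y, d))
print("selected polynomial degree:", best_degree)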
In statistics there can be models of mental events as well as models of physical events. For example, a statistical model of customer behavior is a model that is conceptual (because behavior is physical), but a statistical model of customer satisfaction is a model of a concept (because satisfaction is a mental, not a physical, event).
Social and political models
Economic models
In economics, a model is a theoretical construct that represents economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models use structural parameters. Structural parameters are underlying parameters in a model or class of models. A model may have various parameters and those parameters may change to create various properties.
Models in systems architecture
A system model is the conceptual model that describes and represents the structure, behavior, and other views of a system. A system model can represent multiple views of a system by using two different approaches. The first is the non-architectural approach and the second is the architectural approach. The non-architectural approach picks a separate model for each view. The architectural approach, also known as system architecture, instead of picking many heterogeneous and unrelated models, uses only one integrated architectural model.
Business process modelling
In business process modelling the enterprise process model is often referred to as the business process model. Process models are core concepts in the discipline of process engineering. Process models are:
Processes of the same nature that are classified together into a model.
A description of a process at the type level.
Since the process model is at the type level, a process is an instantiation of it.
The same process model is used repeatedly for the development of many applications and thus, has many instantiations.
One possible use of a process model is to prescribe how things must/should/could be done, in contrast to the process itself, which is really what happens. A process model is roughly an anticipation of what the process will look like. What the process shall be will be determined during actual system development.
Models in information system design
Conceptual models of human activity systems
Conceptual models of human activity systems are used in soft systems methodology (SSM), which is a method of systems analysis concerned with the structuring of problems in management. These models are models of concepts; the authors specifically state that they are not intended to represent a state of affairs in the physical world. They are also used in information requirements analysis (IRA) which is a variant of SSM developed for information system design and software engineering.
Logico-linguistic models
Logico-linguistic modeling is another variant of SSM that uses conceptual models. However, this method combines models of concepts with models of putative real world objects and events. It is a graphical representation of modal logic in which modal operators are used to distinguish statements about concepts from statements about real world objects and events.
Data models
Entity–relationship model
In software engineering, an entity–relationship model (ERM) is an abstract and conceptual representation of data. Entity–relationship modeling is a database modeling method, used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion. Diagrams created by this process are called entity-relationship diagrams, ER diagrams, or ERDs.
Entity–relationship models have had wide application in the building of information systems intended to support activities involving objects and events in the real world. In these cases they are models that are conceptual. However, this modeling method can also be used to build computer games or a family tree of the Greek gods; in these cases it would be used to model concepts.
Domain model
A domain model is a type of conceptual model used to depict the structural elements and their conceptual constraints within a domain of interest (sometimes called the problem domain). A domain model includes the various entities, their attributes and relationships, plus the constraints governing the conceptual integrity of the structural model elements comprising that problem domain. A domain model may also include a number of conceptual views, where each view is pertinent to a particular subject area of the domain or to a particular subset of the domain model which is of interest to a stakeholder of the domain model.
Like entity–relationship models, domain models can be used to model concepts or to model real world objects and events.
See also
Concept
Concept mapping
Conceptual framework
Conceptual model (computer science)
Conceptual schema
Conceptual system
Digital twin
Information model
International Conference on Conceptual Modeling
Interpretation (logic)
Isolated system
Ontology (computer science)
Paradigm
Physical model
Process of concept formation
Scientific modeling
Simulation
Theory
References
Further reading
J. Parsons, L. Cole (2005). "What do the pictures mean? Guidelines for experimental evaluation of representation fidelity in diagrammatical conceptual modeling techniques". Data & Knowledge Engineering 55: 327–342.
A. Gemino, Y. Wand (2005). "Complexity and clarity in conceptual modeling: Comparison of mandatory and optional properties". Data & Knowledge Engineering 55: 301–326.
D. Batra (2005). "Conceptual Data Modeling Patterns". Journal of Database Management 16: 84–106.
F. Papadimitriou (2010). "Conceptual Modelling of Landscape Complexity". Landscape Research 35(5): 563–570.
External links
Models article in the Internet Encyclopedia of Philosophy
Metaphor
Semantics
Simulation
Chemical formula
A chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name since it does not contain any words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae.
The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon and six oxygen atoms).
Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3CH2OH or CH3-CH2-OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents.
Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds (see for example the structural and chemical formulae for butane). For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula.
Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge.
Overview
A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5.
When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous.
For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae, but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds.
Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate ion SO42−. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3 and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2]+[ClO4]−, which illustrates that this compound consists of ClO2+ ions and ClO4− ions. In such cases, the condensed formula only needs to be complex enough to show at least one of each ionic species.
Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs.
Types
Empirical formula
In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element.
For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is also the molecular formula for formaldehyde, but acetic acid has double the number of atoms.
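A minimal illustration of this reduction (the function name and data layout are hypothetical, not taken from any chemistry library) divides the atom counts of a molecular formula by their greatest common divisor:

from functools import reduce
from math import gcd

def empirical(counts):
    # Reduce molecular-formula atom counts by their greatest common divisor
    # to obtain the simplest whole-number ratio (the empirical formula).
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(empirical({"C": 6, "H": 14}))          # hexane C6H14    -> {'C': 3, 'H': 7}
print(empirical({"C": 6, "H": 12, "O": 6}))  # glucose C6H12O6 -> {'C': 1, 'H': 2, 'O': 1}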
Like the other formula types detailed below, an empirical formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements.
Molecular formula
Molecular formulae simply indicate the numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish.
Structural formula
In addition to indicating the number of atoms of each element in a molecule, a structural formula indicates how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure.
Butane and isobutane, for example, are two molecules which are structural isomers of each other, since they both have the same molecular formula C4H10, but they have different structural formulas.
Condensed formula
The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule.
A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or, less commonly, H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.
A triple bond may be expressed with three lines or three pairs of dots, and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond.
Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups. The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3.
Chemical names in answer to limitations of chemical formulae
The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on opposite sides from each other (trans or E).
As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems.
Polymers in condensed formulae
For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3 is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3.
Ions in condensed formulae
For ions, the charge on a particular atom may be denoted with a right-hand superscript. For example, Na+ or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, SO42−. Here + and − are used in place of +1 and −1, respectively.
For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses can be nested inside brackets to indicate a repeating unit, as in hexamminecobalt(III) chloride, [Co(NH3)6]Cl3. Here, (NH3)6 indicates that the ion contains six ammine groups bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3.
This is strictly optional; a chemical formula is valid with or without ionization information, and hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl3 or Co(NH3)6Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent.
Isotopes
Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is 32PO43−. Also a study involving stable isotope ratios might include the molecule 18O16O.
A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen, and 168O2 (mass number 16, atomic number 8) for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly.
Trapped atoms
The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]3−, an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms.
This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.
Non-stoichiometric chemical formulae
Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1.
General forms for organic compounds
A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n+1OH (n ≥ 1), giving the homologs methanol, ethanol and propanol for 1 ≤ n ≤ 3.
Hill system
The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically.
By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order.
The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds.
A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br").
The following example formulae are written using the Hill system, and listed in Hill order:
BrClH2Si
BrI
CCl4
CH3I
C2H5Br
H2O4S
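As a small sketch (the function name and data layout are hypothetical, not taken from any standard cheminformatics library), Hill ordering can be generated from a table of atom counts, and doing so for some of the entries above reproduces the formulae as listed:

def hill_formula(counts):
    # Hill system: carbon first, then hydrogen, then all other elements
    # alphabetically; purely alphabetical order when no carbon is present.
    symbols = sorted(counts)
    if "C" in counts:
        others = sorted(s for s in symbols if s not in ("C", "H"))
        symbols = ["C"] + (["H"] if "H" in counts else []) + others
    return "".join(s + (str(counts[s]) if counts[s] > 1 else "") for s in symbols)

examples = [
    {"C": 1, "Cl": 4},                    # carbon tetrachloride
    {"C": 2, "H": 5, "Br": 1},            # bromoethane
    {"Br": 1, "Cl": 1, "H": 2, "Si": 1},  # no carbon: alphabetical
]
print([hill_formula(c) for c in examples])   # ['CCl4', 'C2H5Br', 'BrClH2Si']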
See also
Formula unit
Glossary of chemical formulae
Nuclear notation
Periodic table
Skeletal formula
Simplified molecular-input line-entry system
Notes
References
External links
Hill notation example, from the University of Massachusetts Lowell libraries, including how to sort into Hill system order
Molecular formula calculation applying Hill notation. The library calculating Hill notation is available on npm.
Chemical nomenclature
Notation
Cofactor (biochemistry)
A cofactor is a non-protein chemical compound or metallic ion that is required for an enzyme's role as a catalyst (a catalyst is a substance that increases the rate of a chemical reaction). Cofactors can be considered "helper molecules" that assist in biochemical transformations. The rates at which these happen are characterized in an area of study called enzyme kinetics. Cofactors typically differ from ligands in that they often derive their function by remaining bound.
Cofactors can be classified into two types: inorganic ions and complex organic molecules called coenzymes. Coenzymes are mostly derived from vitamins and other organic essential nutrients in small amounts. (Some scientists limit the use of the term "cofactor" for inorganic substances; both types are included here.)
Coenzymes are further divided into two types. The first is called a "prosthetic group", which consists of a coenzyme that is tightly (or even covalently) and permanently bound to a protein. The second type of coenzymes are called "cosubstrates", and are transiently bound to the protein. Cosubstrates may be released from a protein at some point, and then rebind later. Both prosthetic groups and cosubstrates have the same function, which is to facilitate the reaction of enzymes and proteins. An inactive enzyme without the cofactor is called an apoenzyme, while the complete enzyme with cofactor is called a holoenzyme.
The International Union of Pure and Applied Chemistry (IUPAC) defines "coenzyme" a little differently, namely as a low-molecular-weight, non-protein organic compound that is loosely attached, participating in enzymatic reactions as a dissociable carrier of chemical groups or electrons; a prosthetic group is defined as a tightly bound, nonpolypeptide unit in a protein that is regenerated in each enzymatic turnover.
Some enzymes or enzyme complexes require several cofactors. For example, the multienzyme complex pyruvate dehydrogenase at the junction of glycolysis and the citric acid cycle requires five organic cofactors and one metal ion: loosely bound thiamine pyrophosphate (TPP), covalently bound lipoamide and flavin adenine dinucleotide (FAD), cosubstrates nicotinamide adenine dinucleotide (NAD+) and coenzyme A (CoA), and a metal ion (Mg2+).
Organic cofactors are often vitamins or made from vitamins. Many contain the nucleotide adenosine monophosphate (AMP) as part of their structures, such as ATP, coenzyme A, FAD, and NAD+. This common structure may reflect a common evolutionary origin as part of ribozymes in an ancient RNA world. It has been suggested that the AMP part of the molecule can be considered to be a kind of "handle" by which the enzyme can "grasp" the coenzyme to switch it between different catalytic centers.
Classification
Cofactors can be divided into two major groups: organic cofactors, such as flavin or heme; and inorganic cofactors, such as the metal ions Mg2+, Cu+, Mn2+ and iron–sulfur clusters.
Organic cofactors are sometimes further divided into coenzymes and prosthetic groups. The term coenzyme refers specifically to enzymes and, as such, to the functional properties of a protein. On the other hand, "prosthetic group" emphasizes the nature of the binding of a cofactor to a protein (tight or covalent) and, thus, refers to a structural property. Different sources give slightly different definitions of coenzymes, cofactors, and prosthetic groups. Some consider tightly bound organic molecules as prosthetic groups and not as coenzymes, while others define all non-protein organic molecules needed for enzyme activity as coenzymes, and classify those that are tightly bound as coenzyme prosthetic groups. These terms are often used loosely.
A 1980 letter in Trends in Biochemical Sciences noted the confusion in the literature and the essentially arbitrary distinction made between prosthetic groups and coenzymes, and proposed the following scheme. Here, cofactors were defined as an additional substance, apart from protein and substrate, that is required for enzyme activity, and a prosthetic group as a substance that undergoes its whole catalytic cycle attached to a single enzyme molecule. However, the author could not arrive at a single all-encompassing definition of a "coenzyme" and proposed that this term be dropped from use in the literature.
Inorganic cofactors
Metal ions
Metal ions are common cofactors. The study of these cofactors falls under the area of bioinorganic chemistry. In nutrition, the list of essential trace elements reflects their role as cofactors. In humans this list commonly includes iron, magnesium, manganese, cobalt, copper, zinc, and molybdenum. Although chromium deficiency causes impaired glucose tolerance, no human enzyme that uses this metal as a cofactor has been identified. Iodine is also an essential trace element, but this element is used as part of the structure of thyroid hormones rather than as an enzyme cofactor. Calcium is another special case, in that it is required as a component of the human diet, and it is needed for the full activity of many enzymes, such as nitric oxide synthase, protein phosphatases, and adenylate kinase, but calcium activates these enzymes in allosteric regulation, often binding to these enzymes in a complex with calmodulin. Calcium is, therefore, a cell signaling molecule, and not usually considered a cofactor of the enzymes it regulates.
Other organisms require additional metals as enzyme cofactors, such as vanadium in the nitrogenase of the nitrogen-fixing bacteria of the genus Azotobacter, tungsten in the aldehyde ferredoxin oxidoreductase of the thermophilic archaean Pyrococcus furiosus, and even cadmium in the carbonic anhydrase from the marine diatom Thalassiosira weissflogii.
In many cases, the cofactor includes both an inorganic and organic component. One diverse set of examples is the heme proteins, which consist of a porphyrin ring coordinated to iron.
Iron–sulfur clusters
Iron–sulfur clusters are complexes of iron and sulfur atoms held within proteins by cysteinyl residues. They play both structural and functional roles, including electron transfer, redox sensing, and as structural modules.
Organic
Organic cofactors are small organic molecules (typically a molecular mass less than 1000 Da) that can be either loosely or tightly bound to the enzyme and directly participate in the reaction. In the latter case, when it is difficult to remove without denaturing the enzyme, it can be called a prosthetic group. There is no sharp division between loosely and tightly bound cofactors. Many, such as NAD+, can be tightly bound in some enzymes, while loosely bound in others. Another example is thiamine pyrophosphate (TPP), which is tightly bound in transketolase or pyruvate decarboxylase, while it is less tightly bound in pyruvate dehydrogenase. Other coenzymes, flavin adenine dinucleotide (FAD), biotin, and lipoamide, for instance, are tightly bound. Tightly bound cofactors are, in general, regenerated during the same reaction cycle, while loosely bound cofactors can be regenerated in a subsequent reaction catalyzed by a different enzyme. In the latter case, the cofactor can also be considered a substrate or cosubstrate.
Vitamins can serve as precursors to many organic cofactors (e.g., vitamins B1, B2, B6, B12, niacin, folic acid) or as coenzymes themselves (e.g., vitamin C). However, vitamins do have other functions in the body. Many organic cofactors also contain a nucleotide, such as the electron carriers NAD and FAD, and coenzyme A, which carries acyl groups. Most of these cofactors are found in a huge variety of species, and some are universal to all forms of life. An exception to this wide distribution is a group of unique cofactors that evolved in methanogens, which are restricted to this group of archaea.
Vitamins and derivatives
Non-vitamins
Cofactors as metabolic intermediates
Metabolism involves a vast array of chemical reactions, but most fall under a few basic types of reactions that involve the transfer of functional groups. This common chemistry allows cells to use a small set of metabolic intermediates to carry chemical groups between different reactions. These group-transfer intermediates are the loosely bound organic cofactors, often called coenzymes.
Each class of group-transfer reaction is carried out by a particular cofactor, which is the substrate for a set of enzymes that produce it, and a set of enzymes that consume it. An example of this are the dehydrogenases that use nicotinamide adenine dinucleotide (NAD+) as a cofactor. Here, hundreds of separate types of enzymes remove electrons from their substrates and reduce NAD+ to NADH. This reduced cofactor is then a substrate for any of the reductases in the cell that require electrons to reduce their substrates.
Therefore, these cofactors are continuously recycled as part of metabolism. As an example, the total quantity of ATP in the human body is about 0.1 mole. This ATP is constantly being broken down into ADP, and then converted back into ATP. Thus, at any given time, the total amount of ATP + ADP remains fairly constant. The energy used by human cells requires the hydrolysis of 100 to 150 moles of ATP daily, which is around 50 to 75 kg. In typical situations, humans use up their body weight of ATP over the course of the day. This means that each ATP molecule is recycled 1000 to 1500 times daily.
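A back-of-the-envelope check of those figures (the molar mass is an approximate value for ATP free acid, and the variable names are purely illustrative):

atp_molar_mass = 507.2       # g/mol, approximate
atp_pool = 0.1               # mol of ATP present in the body at any moment
for daily_mol in (100, 150):
    mass_kg = daily_mol * atp_molar_mass / 1000.0
    recycles = daily_mol / atp_pool
    print(f"{daily_mol} mol/day ~ {mass_kg:.0f} kg of ATP, ~{recycles:.0f} recycles per molecule")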
Evolution
Organic cofactors, such as ATP and NADH, are present in all known forms of life and form a core part of metabolism. Such universal conservation indicates that these molecules evolved very early in the development of living things. At least some of the current set of cofactors may, therefore, have been present in the last universal ancestor, which lived about 4 billion years ago.
Organic cofactors may have been present even earlier in the history of life on Earth. The nucleotide adenosine is a cofactor for many basic metabolic enzymes such as transferases. It may be a remnant of the RNA world. Adenosine-based cofactors may have acted as adaptors that allowed enzymes and ribozymes to bind new cofactors through small modifications in existing adenosine-binding domains, which had originally evolved to bind a different cofactor. This process of adapting a pre-evolved structure for a novel use is known as exaptation.
Prebiotic origin of coenzymes. Like amino acids and nucleotides, certain vitamins and thus coenzymes can be created under early earth conditions. For instance, vitamin B3 can be synthesized with electric discharges applied to ethylene and ammonia. Similarly, pantetheine (a vitamin B5 derivative), a precursor of coenzyme A and thioester-dependent synthesis, can be formed spontaneously under evaporative conditions. Other coenzymes may have existed early on Earth, such as pterins (a derivative of vitamin B9), flavins (FAD, flavin mononucleotide = FMN), and riboflavin (vitamin B2).
Changes in coenzymes. A computational method, IPRO, recently predicted mutations that experimentally switched the cofactor specificity of Candida boidinii xylose reductase from NADPH to NADH.
Evolution of enzymes without coenzymes. If enzymes require a coenzyme, how does the coenzyme evolve? The most likely scenario is that enzymes can function initially without their coenzymes and later recruit the coenzyme, even if the catalyzed reaction may not be as efficient or as fast. Examples are alcohol dehydrogenase (coenzyme: NAD+), lactate dehydrogenase (NAD+), and glutathione reductase (NADPH).
History
The first organic cofactor to be discovered was NAD+, which was identified by Arthur Harden and William Young in 1906. They noticed that adding boiled and filtered yeast extract greatly accelerated alcoholic fermentation in unboiled yeast extracts. They called the unidentified factor responsible for this effect a coferment. Through a long and difficult purification from yeast extracts, this heat-stable factor was identified as a nucleotide sugar phosphate by Hans von Euler-Chelpin. Other cofactors were identified throughout the early 20th century, with ATP being isolated in 1929 by Karl Lohmann, and coenzyme A being discovered in 1945 by Fritz Albert Lipmann.
The functions of these molecules were at first mysterious, but, in 1936, Otto Heinrich Warburg identified the function of NAD+ in hydride transfer. This discovery was followed in the early 1940s by the work of Herman Kalckar, who established the link between the oxidation of sugars and the generation of ATP. This confirmed the central role of ATP in energy transfer that had been proposed by Fritz Albert Lipmann in 1941. Later, in 1949, Morris Friedkin and Albert L. Lehninger proved that NAD+ linked metabolic pathways such as the citric acid cycle and the synthesis of ATP.
Protein-derived cofactors
In a number of enzymes, the moiety that acts as a cofactor is formed by post-translational modification of a part of the protein sequence. This often replaces the need for an external binding factor, such as a metal ion, for protein function. Potential modifications could be oxidation of aromatic residues, binding between residues, cleavage, or ring formation. These alterations are distinct from other post-translational protein modifications, such as phosphorylation, methylation, or glycosylation, in that the amino acids typically acquire new functions. This increases the functionality of the protein; unmodified amino acids are typically limited to acid-base reactions, and the alteration of residues can give the protein electrophilic sites or the ability to stabilize free radicals. Examples of cofactor production include tryptophan tryptophylquinone (TTQ), derived from two tryptophan side chains, and 4-methylidene-imidazole-5-one (MIO), derived from an Ala-Ser-Gly motif. Characterization of protein-derived cofactors is conducted using X-ray crystallography and mass spectrometry; structural data is necessary because sequencing does not readily identify the altered sites.
Non-enzymatic cofactors
The term is used in other areas of biology to refer more broadly to non-protein (or even protein) molecules that either activate, inhibit, or are required for the protein to function. For example, ligands such as hormones that bind to and activate receptor proteins are termed cofactors or coactivators, whereas molecules that inhibit receptor proteins are termed corepressors. One such example is the G protein-coupled receptor family of receptors, which are frequently found in sensory neurons. Ligand binding to the receptors activates the G protein, which then activates an enzyme to activate the effector. In order to avoid confusion, it has been suggested that such proteins that have ligand-binding mediated activation or repression be referred to as coregulators.
See also
Enzyme catalysis
Inorganic chemistry
Organometallic chemistry
Bioorganometallic chemistry
Cofactor engineering
References
Further reading
External links
Cofactors lecture (Powerpoint file)
The CoFactor Database
Enzymes
PubChem
PubChem is a database of chemical molecules and their activities against biological assays. The system is maintained by the National Center for Biotechnology Information (NCBI), a component of the National Library of Medicine, which is part of the United States National Institutes of Health (NIH). PubChem can be accessed for free through a web user interface. Millions of compound structures and descriptive datasets can be freely downloaded via FTP. PubChem contains multiple substance descriptions and small molecules with fewer than 100 atoms and 1,000 bonds. More than 80 database vendors contribute to the growing PubChem database.
History
PubChem was released in 2004 as a component of the Molecular Libraries Program (MLP) of the NIH. As of November 2015, PubChem contains more than 150 million depositor-provided substance descriptions, 60 million unique chemical structures, and 225 million biological activity test results (from over 1 million assay experiments performed on more than 2 million small-molecules covering almost 10,000 unique protein target sequences that correspond to more than 5,000 genes). It also contains RNA interference (RNAi) screening assays that target over 15,000 genes.
As of August 2018, PubChem contains 247.3 million substance descriptions, 96.5 million unique chemical structures, contributed by 629 data sources from 40 countries. It also contains 237 million bioactivity test results from 1.25 million biological assays, covering >10,000 target protein sequences.
As of 2020, with data integration from over 100 new sources, PubChem contains more than 293 million depositor-provided substance descriptions, 111 million unique chemical structures, and 271 million bioactivity data points from 1.2 million biological assays experiments.
Databases
PubChem consists of three dynamically growing primary databases. As of 5 November 2020 (number of BioAssays is unchanged):
Compounds, 111 million entries (up from 94 million entries in 2017), contains pure and characterized chemical compounds.
Substances, 293 million entries (up from 236 million entries in 2017 and 163 million in Sept. 2014), contains also mixtures, extracts, complexes and uncharacterized substances.
BioAssay, bioactivity results from 1.25 million (up from 6,000 in Sept. 2014) high-throughput screening programs with several million values.
Searching
Searching the databases is possible for a broad range of properties including chemical structure, name fragments, chemical formula, molecular weight, XLogP, and hydrogen bond donor and acceptor count.
PubChem contains its own online molecule editor with SMILES/SMARTS and InChI support that allows the import and export of all common chemical file formats to search for structures and fragments.
Each hit provides information about synonyms, chemical properties, chemical structure including SMILES and InChI strings, bioactivity, and links to structurally related compounds and other NCBI databases like PubMed.
In the text search form the database fields can be searched by adding the field name in square brackets to the search term. A numeric range is represented by two numbers separated by a colon. The search terms and field names are case-insensitive. Parentheses and the logical operators AND, OR, and NOT can be used. AND is assumed if no operator is used.
Example (Lipinski's Rule of Five):
0:500[mw] 0:5[hbdc] 0:10[hbac] -5:5[logp]
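The same string can be assembled programmatically; the sketch below is purely illustrative (the function and dictionary names are hypothetical, and only the field names and low:high[field] range syntax shown above are taken from the search form):

def build_query(ranges):
    # Build a PubChem text-search string from {field: (low, high)} pairs.
    return " ".join(f"{low}:{high}[{field}]" for field, (low, high) in ranges.items())

lipinski = {
    "mw":   (0, 500),   # molecular weight
    "hbdc": (0, 5),     # hydrogen bond donor count
    "hbac": (0, 10),    # hydrogen bond acceptor count
    "logp": (-5, 5),    # XLogP
}
print(build_query(lipinski))   # 0:500[mw] 0:5[hbdc] 0:10[hbac] -5:5[logp]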
Database fields
See also
Chemical database
CAS Common Chemistry - run by the American Chemical Society
Comparative Toxicogenomics Database - run by North Carolina State University
ChEMBL - run by European Bioinformatics Institute
ChemSpider - run by UK's Royal Society of Chemistry
DrugBank - run by the University of Alberta
IUPAC - run by Swiss-based International Union of Pure and Applied Chemistry (IUPAC)
Moltable - run by India's National Chemical Laboratory
PubChem - run by the National Institutes of Health, USA
BindingDB - run by the University of California, San Diego
SCRIPDB - run by the University of Toronto, Canada
National Center for Biotechnology Information (NCBI) - run by the National Institutes of Health, USA
Entrez - run by the National Institutes of Health, USA
GenBank - run by the National Institutes of Health, USA
References
External links
Chemical databases
Biological databases
National Institutes of Health
Public-domain software with source code
Computational materials science
Computational materials science and engineering uses modeling, simulation, theory, and informatics to understand materials. The main goals include discovering new materials, determining material behavior and mechanisms, explaining experiments, and exploring materials theories. It is analogous to computational chemistry and computational biology as an increasingly important subfield of materials science.
Introduction
Just as materials science spans all length scales, from electrons to components, so do its computational sub-disciplines. While many methods and variations have been and continue to be developed, seven main simulation techniques, or motifs, have emerged.
These computer simulation methods use underlying models and approximations to understand material behavior in more complex scenarios than pure theory generally allows and with more detail and precision than is often possible from experiments. Each method can be used independently to predict materials properties and mechanisms, to feed information to other simulation methods run separately or concurrently, or to directly compare or contrast with experimental results.
One notable sub-field of computational materials science is integrated computational materials engineering (ICME), which seeks to use computational results and methods in conjunction with experiments, with a focus on industrial and commercial application. Major current themes in the field include uncertainty quantification and propagation throughout simulations for eventual decision making, data infrastructure for sharing simulation inputs and results, high-throughput materials design and discovery, and new approaches given significant increases in computing power and the continuing history of supercomputing.
Materials simulation methods
Electronic structure
Electronic structure methods solve the Schrödinger equation to calculate the energy of a system of electrons and atoms, the fundamental units of condensed matter.
Many variations of electronic structure methods exist of varying computational complexity, with a range of trade-offs between speed and accuracy.
Density functional theory
Due to its balance of computational cost and predictive capability, density functional theory (DFT) has the most significant use in materials science. DFT most often refers to the calculation of the lowest energy state of the system; however, molecular dynamics (atomic motion through time) can be run with DFT computing the forces between atoms.
While DFT and many other electronic structure methods are described as ab initio, there are still approximations and inputs. Within DFT there are increasingly complex, accurate, and slow approximations underlying the simulation, because the exact exchange-correlation functional is not known. The simplest model is the local-density approximation (LDA), becoming more complex with the generalized-gradient approximation (GGA) and beyond.
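For reference, the Kohn-Sham total energy that such calculations minimize is commonly decomposed as follows (written in Hartree atomic units; the decomposition is standard, though the notation is only one of several conventions):

E_{\mathrm{KS}}[n] = T_s[n] + \int v_{\mathrm{ext}}(\mathbf{r})\, n(\mathbf{r})\, \mathrm{d}\mathbf{r} + \frac{1}{2} \iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, \mathrm{d}\mathbf{r}\, \mathrm{d}\mathbf{r}' + E_{\mathrm{xc}}[n]

where T_s is the kinetic energy of the non-interacting Kohn-Sham electrons, v_ext the external (ionic) potential, the double integral the Hartree energy, and E_xc the exchange-correlation functional that LDA, GGA, and related schemes approximate.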
An additional common approximation is to use a pseudopotential in place of core electrons, significantly speeding up simulations.
Atomistic methods
This section discusses the two major atomic simulation methods in materials science. Other particle-based methods include material point method and particle-in-cell, most often used for solid mechanics and plasma physics, respectively.
Molecular dynamics
The term molecular dynamics (MD) is the historical name used to classify simulations of classical atomic motion through time. Typically, interactions between atoms are defined and fit to both experimental and electronic structure data with a wide variety of models, called interatomic potentials. With the interactions prescribed (forces), Newtonian motion is numerically integrated. The forces for MD can also be calculated using electronic structure methods based on either the Born–Oppenheimer approximation or Car–Parrinello approaches.
The simplest models include only van der Waals type attractions and steep repulsion to keep atoms apart; the nature of these models is derived from dispersion forces. Increasingly more complex models include effects due to coulomb interactions (e.g. ionic charges in ceramics), covalent bonds and angles (e.g. polymers), and electronic charge density (e.g. metals). Some models use fixed bonds, defined at the start of the simulation, while others have dynamic bonding. More recent efforts strive for robust, transferable models with generic functional forms: spherical harmonics, Gaussian kernels, and neural networks. In addition, MD can be used to simulate groupings of atoms within generic particles, called coarse-grained modeling, e.g. creating one particle per monomer within a polymer.
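A minimal sketch of the core MD loop, assuming two atoms interacting through a Lennard-Jones potential in reduced units and integrated with the velocity Verlet scheme (all names and parameter values are illustrative; production codes add neighbour lists, periodic boundaries, thermostats, and far more efficient force evaluation):

import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    # Pairwise Lennard-Jones forces and potential energy (no cutoff, no PBC).
    forces, energy = np.zeros_like(pos), 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma ** 2 / r2) ** 3
            energy += 4.0 * eps * (sr6 ** 2 - sr6)
            f = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * rij   # force on atom i
            forces[i] += f
            forces[j] -= f
    return forces, energy

def velocity_verlet(pos, vel, mass, dt, steps):
    # Numerically integrate Newtonian motion with the velocity Verlet scheme.
    forces, _ = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * forces / mass
        pos += dt * vel
        forces, _ = lj_forces(pos)
        vel += 0.5 * dt * forces / mass
    return pos, vel

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])   # two atoms, reduced units
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=1000)
print(pos)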
Kinetic Monte Carlo
Monte Carlo in the context of materials science most often refers to atomistic simulations relying on rates. In kinetic Monte Carlo (kMC) the rates for all possible changes within the system are defined and probabilistically evaluated. Because kMC does not directly integrate atomic motion (as molecular dynamics does), it is able to simulate significantly different problems over much longer timescales.
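A minimal rejection-free (residence-time) kMC loop might look like the following; the event catalogue, rates, and function names are illustrative assumptions rather than any particular kMC package:

import math
import random

def kmc(rates_of, state, t_end):
    # rates_of(state) returns a list of (rate, new_state) pairs for all
    # possible events; one event is chosen with probability rate/total and
    # the clock advances by an exponentially distributed waiting time.
    t = 0.0
    while t < t_end:
        events = rates_of(state)
        total = sum(rate for rate, _ in events)
        if total == 0.0:
            break
        pick, acc = random.uniform(0.0, total), 0.0
        for rate, new_state in events:
            acc += rate
            if pick <= acc:
                state = new_state
                break
        t += -math.log(1.0 - random.random()) / total
    return state, t

# Toy example: a particle hopping left or right on a 1-D lattice with equal rates.
hop = lambda site: [(1.0, site - 1), (1.0, site + 1)]
print(kmc(hop, 0, t_end=100.0))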
Mesoscale methods
The methods listed here are among the most common and the most directly tied to materials science specifically, where atomistic and electronic structure calculations are also widely used in computational chemistry and computational biology and continuum level simulations are common in a wide array of computational science application domains.
Other methods within materials science include cellular automata for solidification and grain growth, Potts model approaches for grain evolution and other Monte Carlo techniques, as well as direct simulation of grain structures analogous to dislocation dynamics.
Dislocation dynamics
Plastic deformation in metals is dominated by the movement of dislocations, which are crystalline defects in materials with line-type character. Rather than simulating the movement of tens of billions of atoms to model plastic deformation, which would be prohibitively computationally expensive, discrete dislocation dynamics (DDD) simulates the movement of dislocation lines. The overall goal of dislocation dynamics is to determine the movement of a set of dislocations given their initial positions, an external load, and the interacting microstructure. From this, macroscale deformation behavior can be extracted from the movement of individual dislocations by theories of plasticity.
A typical DDD simulation goes as follows. A dislocation line can be modelled as a set of nodes connected by segments. This is similar to a mesh used in finite element modelling. Then, the forces on each of the nodes of the dislocation are calculated. These forces include any externally applied forces, forces due to the dislocation interacting with itself or other dislocations, forces from obstacles such as solutes or precipitates, and the drag force on the dislocation due to its motion, which is proportional to its velocity. The general method behind a DDD simulation is to calculate the forces on a dislocation at each of its nodes, from which the velocity of the dislocation at its nodes can be extracted. Then, the dislocation is moved forward according to this velocity and a given timestep. This procedure is then repeated. Over time, the dislocation may encounter enough obstacles such that it can no longer move and its velocity is near zero, at which point the simulation can be stopped and a new experiment can be conducted with this new dislocation arrangement.
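The update described above can be sketched as an overdamped time step in which the drag force balances the total nodal force, so the velocity is the force divided by a drag coefficient; the constant force model and all names below are illustrative placeholders rather than anything taken from codes such as ParaDiS:

import numpy as np

def ddd_step(nodes, nodal_force, drag, dt):
    # One explicit step of an overdamped nodal model: B * v = F  =>  v = F / B.
    velocities = nodal_force(nodes) / drag
    return nodes + dt * velocities

# A straight dislocation line discretised into nodes, pushed by a constant
# (placeholder) glide force.
nodes = np.stack([np.linspace(0.0, 1.0, 11), np.zeros(11), np.zeros(11)], axis=1)
constant_force = lambda n: np.tile([0.0, 1.0e-3, 0.0], (len(n), 1))
for _ in range(100):
    nodes = ddd_step(nodes, constant_force, drag=1.0, dt=0.01)
print(nodes[:, 1])   # every node has glided the same distance in +y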
Both small-scale and large-scale dislocation simulations exist. For example, 2D dislocation models have been used to model the glide of a dislocation through a single plane as it interacts with various obstacles, such as precipitates. This further captures phenomena such as shearing and bowing of precipitates. The drawback to 2D DDD simulations is that phenomena involving movement out of a glide plane cannot be captured, such as cross slip and climb, although they are easier to run computationally. Small 3D DDD simulations have been used to simulate phenomena such as dislocation multiplication at Frank-Read sources, and larger simulations can capture work hardening in a metal with many dislocations, which interact with each other and can multiply. A number of 3D DDD codes exist, such as ParaDiS, microMegas, and MDDP, among others.
Other methods for simulating dislocation motion also exist, ranging from full molecular dynamics simulations to continuum dislocation dynamics and phase field models.
Phase field
Phase field methods are focused on phenomena dependent on interfaces and interfacial motion. Both the free energy function and the kinetics (mobilities) are defined in order to propagate the interfaces within the system through time.
Crystal plasticity
Crystal plasticity simulates the effects of atomistic, dislocation-based motion without directly resolving either atoms or dislocations. Instead, the crystal orientations are updated through time using elasticity theory, plasticity through yield surfaces, and hardening laws. In this way, the stress-strain behavior of a material can be determined.
Continuum simulation
Finite element method
Finite element methods divide systems in space and solve the relevant physical equations throughout that decomposition. The phenomena treated range from thermal and mechanical to electromagnetic and other physical behavior. It is important to note from a materials science perspective that continuum methods generally ignore material heterogeneity and assume local materials properties to be identical throughout the system.
Materials modeling methods
All of the simulation methods described above contain models of materials behavior. The exchange-correlation functional for density functional theory, interatomic potential for molecular dynamics, and free energy functional for phase field simulations are examples. The degree to which each simulation method is sensitive to changes in the underlying model can be drastically different. Models themselves are often directly useful for materials science and engineering, not only to run a given simulation.
CALPHAD
Phase diagrams are integral to materials science, and the development of computational phase diagrams stands as one of the most important and successful examples of ICME. The CALculation of PHAse Diagrams (CALPHAD) method does not, generally speaking, constitute a simulation; instead, the models and optimizations result in phase diagrams used to predict phase stability, which is extremely useful in materials design and materials process optimization.
Comparison of methods
For each material simulation method, there is a fundamental unit, characteristic length and time scale, and associated model(s).
Multi-scale simulation
Many of the methods described can be combined, either running simultaneously or separately, feeding information between length scales or accuracy levels.
Concurrent multi-scale
Concurrent simulations in this context means methods used directly together, within the same code, with the same time step, and with direct mapping between the respective fundamental units.
One type of concurrent multiscale simulation is quantum mechanics/molecular mechanics (QM/MM). This involves running a small portion (often a molecule or protein of interest) with a more accurate electronic structure calculation and surrounding it with a larger region of fast-running, less accurate classical molecular dynamics. Many other methods exist, such as atomistic-continuum simulations, which are similar to QM/MM except that they use molecular dynamics and the finite element method as the fine (high-fidelity) and coarse (low-fidelity) methods, respectively.
Hierarchical multi-scale
Hierarchical simulation refers to those which directly exchange information between methods, but are run in separate codes, with differences in length and/or time scales handled through statistical or interpolative techniques.
A common method of accounting for crystal orientation effects together with geometry embeds crystal plasticity within finite element simulations.
Model development
Building a materials model at one scale often requires information from another, lower scale. Some examples are included here.
The most common scenario for classical molecular dynamics simulations is to develop the interatomic model directly using density functional theory, most often electronic structure calculations. Classical MD can therefore be considered a hierarchical multi-scale technique, as well as a coarse-grained method (ignoring electrons). Similarly, coarse grained molecular dynamics are reduced or simplified particle simulations directly trained from all-atom MD simulations. These particles can represent anything from carbon-hydrogen pseudo-atoms, entire polymer monomers, to powder particles.
Density functional theory is also often used to train and develop CALPHAD-based phase diagrams.
Software and tools
Each modeling and simulation method has a combination of commercial, open-source, and lab-based codes. Open source software is becoming increasingly common, as are community codes which combine development efforts together. Examples include Quantum ESPRESSO (DFT), LAMMPS (MD), ParaDIS (DD), FiPy (phase field), and MOOSE (Continuum). In addition, open software from other communities is often useful for materials science, e.g. GROMACS developed within computational biology.
Conferences
All major materials science conferences include computational research. Focusing entirely on computational efforts, the TMS ICME World Congress meets biannually. The Gordon Research Conference on Computational Materials Science and Engineering began in 2020. Many other method specific smaller conferences are also regularly organized.
Journals
Many materials science journals, as well as those from related disciplines welcome computational materials research. Those dedicated to the field include Computational Materials Science, Modelling and Simulation in Materials Science and Engineering, and npj Computational Materials.
Related fields
Computational materials science is one sub-discipline of both computational science and computational engineering, containing significant overlap with computational chemistry and computational physics. In addition, many atomistic methods are common between computational chemistry, computational biology, and CMSE; similarly, many continuum methods overlap with many other fields of computational engineering.
See also
References
External links
TMS World Congress on Integrated Computational Materials Engineering (ICME)
nanoHUB computational materials resources
Computational science
Computational physics
ChEBI
Chemical Entities of Biological Interest, also known as ChEBI, is a chemical database and ontology of molecular entities focused on 'small' chemical compounds, that is part of the Open Biomedical Ontologies (OBO) effort at the European Bioinformatics Institute (EBI). The term "molecular entity" refers to any "constitutionally or isotopically distinct atom, molecule, ion, ion pair, radical, radical ion, complex, conformer, etc., identifiable as a separately distinguishable entity". The molecular entities in question are either products of nature or synthetic products which have potential bioactivity. Molecules directly encoded by the genome, such as nucleic acids, proteins and peptides derived from proteins by proteolytic cleavage, are not as a rule included in ChEBI.
ChEBI uses nomenclature, symbolism and terminology endorsed by the International Union of Pure and Applied Chemistry (IUPAC) and nomenclature committee of the International Union of Biochemistry and Molecular Biology (NC-IUBMB).
Scope and access
All data in the database is non-proprietary or is derived from a non-proprietary source. It is thus freely accessible and available to anyone. In addition, each data item is fully traceable and explicitly referenced to the original source. ChEBI is similar in scope to other databases such as ChEMBL, ChemSpider, DrugBank, MetaboLights and PubChem.
ChEBI data is available through a public web application, web services, SPARQL endpoint and downloads.
References
Biological databases
Chemical databases
Chemical nomenclature
Science and technology in Cambridgeshire
South Cambridgeshire District
Metabolic pathway
In biochemistry, a metabolic pathway is a linked series of chemical reactions occurring within a cell. The reactants, products, and intermediates of an enzymatic reaction are known as metabolites, which are modified by a sequence of chemical reactions catalyzed by enzymes. In most cases of a metabolic pathway, the product of one enzyme acts as the substrate for the next. However, side products are considered waste and removed from the cell.
Different metabolic pathways function in different locations within a eukaryotic cell, and the significance of a given pathway depends on the compartment of the cell in which it occurs. For instance, the electron transport chain and oxidative phosphorylation both take place in the mitochondrial membrane. In contrast, glycolysis, the pentose phosphate pathway, and fatty acid biosynthesis all occur in the cytosol of a cell.
There are two types of metabolic pathways that are characterized by their ability to either synthesize molecules with the utilization of energy (anabolic pathway), or break down complex molecules and release energy in the process (catabolic pathway).
The two pathways complement each other in that the energy released from one is used up by the other. The degradative process of a catabolic pathway provides the energy required to conduct the biosynthesis of an anabolic pathway. In addition to the two distinct metabolic pathways is the amphibolic pathway, which can be either catabolic or anabolic based on the need for or the availability of energy.
Pathways are required for the maintenance of homeostasis within an organism and the flux of metabolites through a pathway is regulated depending on the needs of the cell and the availability of the substrate. The end product of a pathway may be used immediately, initiate another metabolic pathway or be stored for later use. The metabolism of a cell consists of an elaborate network of interconnected pathways that enable the synthesis and breakdown of molecules (anabolism and catabolism).
Overview
Each metabolic pathway consists of a series of biochemical reactions that are connected by their intermediates: the products of one reaction are the substrates for subsequent reactions, and so on. Metabolic pathways are often considered to flow in one direction. Although all chemical reactions are technically reversible, conditions in the cell are often such that it is thermodynamically more favorable for flux to proceed in one direction of a reaction. For example, one pathway may be responsible for the synthesis of a particular amino acid, but the breakdown of that amino acid may occur via a separate and distinct pathway. One example of an exception to this "rule" is the metabolism of glucose. Glycolysis results in the breakdown of glucose, but several reactions in the glycolysis pathway are reversible and participate in the re-synthesis of glucose (gluconeogenesis).
Glycolysis was the first metabolic pathway discovered:
As glucose enters a cell, it is immediately phosphorylated by ATP to glucose 6-phosphate in the irreversible first step.
In times of excess lipid or protein energy sources, certain reactions in the glycolysis pathway may run in reverse to produce glucose 6-phosphate, which is then used for storage as glycogen or starch.
Metabolic pathways are often regulated by feedback inhibition.
Some metabolic pathways flow in a 'cycle' wherein each component of the cycle is a substrate for the subsequent reaction in the cycle, such as in the Krebs Cycle (see below).
Anabolic and catabolic pathways in eukaryotes often occur independently of each other, separated either physically by compartmentalization within organelles or separated biochemically by the requirement of different enzymes and co-factors.
Major metabolic pathways
Catabolic pathway (catabolism)
A catabolic pathway is a series of reactions that bring about a net release of energy in the form of a high energy phosphate bond formed with the energy carriers adenosine diphosphate (ADP) and guanosine diphosphate (GDP) to produce adenosine triphosphate (ATP) and guanosine triphosphate (GTP), respectively. The net reaction is, therefore, thermodynamically favorable, for it results in a lower free energy for the final products. A catabolic pathway is an exergonic system that produces chemical energy in the form of ATP, GTP, NADH, NADPH, FADH2, etc. from energy containing sources such as carbohydrates, fats, and proteins. The end products are often carbon dioxide, water, and ammonia. Coupled with an endergonic reaction of anabolism, the cell can synthesize new macromolecules using the original precursors of the anabolic pathway. An example of a coupled reaction is the phosphorylation of fructose-6-phosphate to form the intermediate fructose-1,6-bisphosphate by the enzyme phosphofructokinase accompanied by the hydrolysis of ATP in the pathway of glycolysis. The resulting chemical reaction within the metabolic pathway is highly thermodynamically favorable and, as a result, irreversible in the cell.
Fructose-6-Phosphate + ATP -> Fructose-1,6-Bisphosphate + ADP
Cellular respiration
A core set of energy-producing catabolic pathways occur within all living organisms in some form. These pathways transfer the energy released by breakdown of nutrients into ATP and other small molecules used for energy (e.g. GTP, NADPH, FADH2). All cells can perform anaerobic respiration by glycolysis. Additionally, most organisms can perform more efficient aerobic respiration through the citric acid cycle and oxidative phosphorylation. Additionally plants, algae and cyanobacteria are able to use sunlight to anabolically synthesize compounds from non-living matter by photosynthesis.
Anabolic pathway (anabolism)
In contrast to catabolic pathways, anabolic pathways require an energy input to construct macromolecules such as polypeptides, nucleic acids, proteins, polysaccharides, and lipids. The isolated reaction of anabolism is unfavorable in a cell due to a positive Gibbs free energy (+ΔG). Thus, an input of chemical energy through a coupling with an exergonic reaction is necessary. The coupled reaction of the catabolic pathway affects the thermodynamics of the reaction by lowering the overall activation energy of an anabolic pathway and allowing the reaction to take place. Otherwise, an endergonic reaction is non-spontaneous.
An anabolic pathway is a biosynthetic pathway, meaning that it combines smaller molecules to form larger and more complex ones. An example is the reversed pathway of glycolysis, otherwise known as gluconeogenesis, which occurs in the liver and sometimes in the kidney to maintain proper glucose concentration in the blood and to supply the brain and muscle tissues with an adequate amount of glucose. Although gluconeogenesis is similar to the reverse pathway of glycolysis, it contains four enzymes distinct from those of glycolysis (pyruvate carboxylase, phosphoenolpyruvate carboxykinase, fructose 1,6-bisphosphatase, and glucose 6-phosphatase) that allow the pathway to occur spontaneously.
Amphibolic pathway (Amphibolism)
An amphibolic pathway is one that can be either catabolic or anabolic based on the availability of or the need for energy. The currency of energy in a biological cell is adenosine triphosphate (ATP), which stores its energy in the phosphoanhydride bonds. The energy is utilized to conduct biosynthesis, facilitate movement, and regulate active transport inside of the cell. Examples of amphibolic pathways are the citric acid cycle and the glyoxylate cycle. These sets of chemical reactions contain both energy-producing and energy-utilizing pathways.
The glyoxylate shunt pathway is an alternative to the tricarboxylic acid (TCA) cycle, for it redirects the pathway of TCA to prevent full oxidation of carbon compounds, and to preserve high energy carbon sources as future energy sources. This pathway occurs only in plants and bacteria and transpires in the absence of glucose molecules.
Regulation
The flux of the entire pathway is regulated by the rate-determining steps. These are the slowest steps in a network of reactions. The rate-limiting step occurs near the beginning of the pathway and is regulated by feedback inhibition, which ultimately controls the overall rate of the pathway. The metabolic pathway in the cell is regulated by covalent or non-covalent modifications. A covalent modification involves an addition or removal of a chemical bond, whereas a non-covalent modification (also known as allosteric regulation) is the binding of the regulator to the enzyme via hydrogen bonds, electrostatic interactions, and Van der Waals forces.
The rate of turnover in a metabolic pathway, also known as the metabolic flux, is regulated based on the stoichiometric reaction model, the utilization rate of metabolites, and the translocation pace of molecules across the lipid bilayer. Methods for measuring the flux are based on experiments involving 13C labeling, which is then analyzed by nuclear magnetic resonance (NMR) or gas chromatography-mass spectrometry (GC-MS)-derived mass compositions. These techniques relate the statistical distribution of mass in proteinogenic amino acids to the catalytic activities of enzymes in a cell.
Clinical applications in targeting metabolic pathways
Targeting oxidative phosphorylation
Metabolic pathways can be targeted for clinically therapeutic uses. Within the mitochondrial metabolic network, for instance, there are various pathways that can be targeted by compounds to prevent cancer cell proliferation. One such pathway is oxidative phosphorylation (OXPHOS) within the electron transport chain (ETC). Various inhibitors can downregulate the electrochemical reactions that take place at Complexes I, II, III, and IV, thereby preventing the formation of an electrochemical gradient and downregulating the movement of electrons through the ETC. The ATP synthesis that occurs at ATP synthase can also be directly inhibited, preventing the formation of the ATP that is necessary to supply energy for cancer cell proliferation. Some of these inhibitors, such as lonidamine and atovaquone, which inhibit Complex II and Complex III, respectively, are currently undergoing clinical trials for FDA approval. Other non-FDA-approved inhibitors have still shown experimental success in vitro.
Targeting Heme
Heme, an important prosthetic group present in Complexes I, II, and IV, can also be targeted, since heme biosynthesis and uptake have been correlated with increased cancer progression. Various molecules can inhibit heme via different mechanisms. For instance, succinylacetone has been shown to decrease heme concentrations by inhibiting δ-aminolevulinic acid dehydratase in murine erythroleukemia cells. The primary structure of heme-sequestering peptides, such as HSP1 and HSP2, can be modified to downregulate heme concentrations and reduce proliferation of non-small-cell lung cancer cells.
Targeting the tricarboxylic acid cycle and glutaminolysis
The tricarboxylic acid cycle (TCA) and glutaminolysis can also be targeted for cancer treatment, since they are essential for the survival and proliferation of cancer cells. Ivosidenib and enasidenib, two FDA-approved cancer treatments, can arrest the TCA cycle of cancer cells by inhibiting isocitrate dehydrogenase-1 (IDH1) and isocitrate dehydrogenase-2 (IDH2), respectively. Ivosidenib is specific to acute myeloid leukemia (AML) and cholangiocarcinoma, whereas enasidenib is specific to just acute myeloid leukemia (AML).
In a clinical trial consisting of 185 adult patients with cholangiocarcinoma and an IDH-1 mutation, there was a statistically significant improvement (p<0.0001; HR: 0.37) in patients randomized to ivosidenib. Still, some of the adverse side effects in these patients included fatigue, nausea, diarrhea, decreased appetite, ascites, and anemia. In a clinical trial consisting of 199 adult patients with AML and an IDH2 mutation, 23% of patients experienced complete response (CR) or complete response with partial hematologic recovery (CRh) lasting a median of 8.2 months while on enasidenib. Of the 157 patients who required transfusion at the beginning of the trial, 34% no longer required transfusions during the 56-day time period on enasidenib. Of the 42% of patients who did not require transfusions at the beginning of the trial, 76% still did not require a transfusion by the end of the trial. Side effects of enasidenib included nausea, diarrhea, elevated bilirubin and, most notably, differentiation syndrome.
Glutaminase (GLS), the enzyme responsible for converting glutamine to glutamate via hydrolytic deamidation during the first reaction of glutaminolysis, can also be targeted. In recent years, many small molecules, such as azaserine, acivicin, and CB-839 have been shown to inhibit glutaminase, thus reducing cancer cell viability and inducing apoptosis in cancer cells. Due to its effective antitumor ability in several cancer types such as ovarian, breast and lung cancers, CB-839 is the only GLS inhibitor currently undergoing clinical studies for FDA-approval.
Genetic engineering of metabolic pathways
Many metabolic pathways are of commercial interest. For instance, the production of many antibiotics or other drugs requires complex pathways. The pathways to produce such compounds can be transplanted into microbes or other more suitable organisms for production purposes. For example, the world's supply of the anti-cancer drug vinblastine is produced by relatively inefficient extraction and purification of the precursors vindoline and catharanthine from the plant Catharanthus roseus, which are then chemically converted into vinblastine. The biosynthetic pathway to produce vinblastine, including 30 enzymatic steps, has been transferred into yeast cells, which are a convenient system to grow in large amounts. With these genetic modifications yeast can use its own metabolites geranyl pyrophosphate and tryptophan to produce the precursors of catharanthine and vindoline. This process required 56 genetic edits, including expression of 34 heterologous genes from plants in yeast cells.
See also
KaPPA-View4 (2010)
Metabolism
Metabolic control analysis
Metabolic network
Metabolic network modelling
Metabolic engineering
Biochemical systems equation
Linear biochemical pathway
References
External links
Full map of metabolic pathways
Biochemical pathways, Gerhard Michal
Overview Map from BRENDA
BioCyc: Metabolic network models for thousands of sequenced organisms
KEGG: Kyoto Encyclopedia of Genes and Genomes
Reactome, a database of reactions, pathways and biological processes
MetaCyc: A database of experimentally elucidated metabolic pathways (2,200+ pathways from more than 2,500 organisms)
MetaboMAPS: A platform for pathway sharing and data visualization on metabolic pathways
The Pathway Localization database (PathLocdb)
DAVID: Visualize genes on pathway maps
Wikipathways: pathways for the people
ConsensusPathDB
metpath: Integrated interactive metabolic chart
Environmental chemistry
Environmental chemistry is the scientific study of the chemical and biochemical phenomena that occur in natural places. It should not be confused with green chemistry, which seeks to reduce potential pollution at its source. It can be defined as the study of the sources, reactions, transport, effects, and fates of chemical species in the air, soil, and water environments; and the effect of human activity and biological activity on these. Environmental chemistry is an interdisciplinary science that includes atmospheric, aquatic and soil chemistry, as well as heavily relying on analytical chemistry and being related to environmental and other areas of science.
Environmental chemistry involves first understanding how the uncontaminated environment works, which chemicals in what concentrations are present naturally, and with what effects. Without this it would be impossible to accurately study the effects humans have on the environment through the release of chemicals.
Environmental chemists draw on a range of concepts from chemistry and various environmental sciences to assist in their study of what is happening to a chemical species in the environment. Important general concepts from chemistry include understanding chemical reactions and equations, solutions, units, sampling, and analytical techniques.
Contaminant
A contaminant is a substance present in nature at a level higher than fixed levels or that would not otherwise be there. This may be due to human activity or bioactivity. The term contaminant is often used interchangeably with pollutant, which is a substance that detrimentally impacts the surrounding environment. While a contaminant is sometimes a substance present in the environment as a result of human activity but without harmful effects, it is sometimes the case that toxic or harmful effects from contamination only become apparent at a later date.
The "medium" such as soil or organism such as fish affected by the pollutant or contaminant is called a receptor, whilst a sink is a chemical medium or species that retains and interacts with the pollutant such as carbon sink and its effects by microbes.
Environmental indicators
Chemical measures of water quality include dissolved oxygen (DO), chemical oxygen demand (COD), biochemical oxygen demand (BOD), total dissolved solids (TDS), pH, nutrients (nitrates and phosphorus), heavy metals, soil chemicals (including copper, zinc, cadmium, lead and mercury), and pesticides.
Applications
Environmental chemistry is used by the Environment Agency in England, Natural Resources Wales, the United States Environmental Protection Agency, the Association of Public Analysts, and other environmental agencies and research bodies around the world to detect and identify the nature and source of pollutants. These can include:
Heavy metal contamination of land by industry. These can then be transported into water bodies and be taken up by living organisms such as animals and plants.
PAHs (Polycyclic Aromatic Hydrocarbon) in large bodies of water contaminated by oil spills or leaks. Many of the PAHs are carcinogens and are extremely toxic. They are regulated by concentration (ppb) using environmental chemistry and chromatography laboratory testing.
Nutrients leaching from agricultural land into water courses, which can lead to algal blooms and eutrophication.
Urban runoff of pollutants washing off impervious surfaces (roads, parking lots, and rooftops) during rain storms. Typical pollutants include gasoline, motor oil and other hydrocarbon compounds, metals, nutrients and sediment (soil).
Organometallic compounds.
Methods
Quantitative chemical analysis is a key part of environmental chemistry, since it provides the data that frame most environmental studies.
Common analytical techniques used for quantitative determinations in environmental chemistry include classical wet chemistry, such as gravimetric, titrimetric and electrochemical methods. More sophisticated approaches are used in the determination of trace metals and organic compounds. Metals are commonly measured by atomic spectroscopy and mass spectrometry: Atomic Absorption Spectrophotometry (AAS) and Inductively Coupled Plasma Atomic Emission (ICP-AES) or Inductively Coupled Plasma Mass Spectrometric (ICP-MS) techniques. Organic compounds, including PAHs, are commonly measured also using mass spectrometric methods, such as Gas chromatography-mass spectrometry (GC/MS) and Liquid chromatography-mass spectrometry (LC/MS). Tandem Mass spectrometry MS/MS and High Resolution/Accurate Mass spectrometry HR/AM offer sub part per trillion detection. Non-MS methods using GCs and LCs having universal or specific detectors are still staples in the arsenal of available analytical tools.
Other parameters often measured in environmental chemistry are radiochemicals. These are pollutants that emit ionizing radiation, such as alpha and beta particles, posing danger to human health and the environment. Particle counters and scintillation counters are most commonly used for these measurements. Bioassays and immunoassays are utilized for toxicity evaluations of chemical effects on various organisms. The polymerase chain reaction (PCR) is able to identify species of bacteria and other organisms through specific DNA and RNA gene isolation and amplification, and is showing promise as a valuable technique for identifying environmental microbial contamination.
Published analytical methods
Peer-reviewed test methods have been published by government agencies and private research organizations. Approved published methods must be used when testing to demonstrate compliance with regulatory requirements.
Notable environmental chemists
Joan Berkowitz
Paul Crutzen (Nobel Prize in Chemistry, 1995)
Philip Gschwend
Alice Hamilton
John M. Hayes
Charles David Keeling
Ralph Keeling
Mario Molina (Nobel Prize in Chemistry, 1995)
James J. Morgan
Clair Patterson
Roger Revelle
F. Sherwood Rowland (Nobel Prize in Chemistry, 1995)
Robert Angus Smith
Susan Solomon
Werner Stumm
Ellen Swallow Richards
Hans Suess
John Tyndall
See also
Environmental monitoring
Freshwater environmental quality parameters
Green chemistry
Green Chemistry Journal
Journal of Environmental Monitoring
Important publications in Environmental chemistry
List of chemical analysis methods
References
Further reading
Stanley E Manahan. Environmental Chemistry. CRC Press. 2004. .
Rene P Schwarzenbach, Philip M Gschwend, Dieter M Imboden. Environmental Organic Chemistry, Second edition. Wiley-Interscience, Hoboken, New Jersey, 2003. .
NCERT Class XI textbook, unit 14.
External links
List of links for Environmental Chemistry - from the WWW Virtual Library
International Journal of Environmental Analytical Chemistry
Biochemistry
Chemistry
Water pollution
Chemical synthesis
Chemical synthesis (chemical combination) is the artificial execution of chemical reactions to obtain one or several products. This occurs by physical and chemical manipulations usually involving one or more reactions. In modern laboratory uses, the process is reproducible and reliable.
A chemical synthesis involves one or more compounds (known as reagents or reactants) that will experience a transformation when subjected to certain conditions. Various reaction types can be applied to formulate a desired product. This requires mixing the compounds in a reaction vessel, such as a chemical reactor or a simple round-bottom flask. Many reactions require some form of processing ("work-up") or purification procedure to isolate the final product.
The amount produced by chemical synthesis is known as the reaction yield. Typically, yields are expressed as a mass in grams (in a laboratory setting) or as a percentage of the total theoretical quantity that could be produced based on the limiting reagent. A side reaction is an unwanted chemical reaction that reduces the desired yield. The word synthesis was first used in a chemical context by the chemist Hermann Kolbe.
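As a small worked illustration of the yield bookkeeping described above, the sketch below finds the limiting reagent from the supplied amounts and stoichiometric coefficients, computes the theoretical product mass, and reports the percent yield. The reaction and all numbers in the example are arbitrary and purely illustrative.

def percent_yield(reactants, product_coeff, product_molar_mass, actual_mass_g):
    """reactants: dict name -> (moles available, stoichiometric coefficient).
    Returns (limiting reagent, theoretical mass in g, percent yield)."""
    # The limiting reagent allows the fewest "turns" of the balanced equation.
    limiting = min(reactants, key=lambda r: reactants[r][0] / reactants[r][1])
    turns = reactants[limiting][0] / reactants[limiting][1]
    theoretical_g = turns * product_coeff * product_molar_mass
    return limiting, theoretical_g, 100.0 * actual_mass_g / theoretical_g

# Illustrative numbers only: A + 2 B -> C, with C having molar mass 120 g/mol.
limiting, theo, pct = percent_yield(
    {"A": (0.50, 1), "B": (0.80, 2)}, product_coeff=1,
    product_molar_mass=120.0, actual_mass_g=39.0)
print(limiting, round(theo, 1), "g theoretical,", round(pct, 1), "% yield")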
Strategies
Many strategies exist in chemical synthesis that are more complicated than simply converting a reactant A to a reaction product B directly. For a multistep synthesis, a chemical compound is synthesized by a series of individual chemical reactions, each with its own work-up. For example, a laboratory synthesis of paracetamol can consist of three sequential parts. In cascade reactions, multiple chemical transformations occur within a single reactant; in multi-component reactions, as many as 11 different reactants form a single reaction product; and in a "telescopic synthesis", one reactant experiences multiple transformations without isolation of intermediates.
Organic synthesis
Organic synthesis is a special type of chemical synthesis dealing with the synthesis of organic compounds. For the total synthesis of a complex product, multiple procedures in sequence may be required to synthesize the product of interest, requiring a large amount of time. Skill in organic synthesis is prized among chemists and the synthesis of exceptionally valuable or difficult compounds has won chemists such as Robert Burns Woodward a Nobel Prize in Chemistry. A purely synthetic chemical synthesis begins with basic lab compounds. A semisynthetic process starts with natural products from plants or animals and then modifies them into new compounds.
Inorganic synthesis
Inorganic synthesis and organometallic synthesis are applied to the preparation of compounds with significant non-organic content. An illustrative example is the preparation of the anti-cancer drug cisplatin from potassium tetrachloroplatinate.
See also
Beilstein database
Biosynthesis
Chemical engineering
Click chemistry
Electrosynthesis
Methods in Organic Synthesis
Organic synthesis
Peptide synthesis
Total synthesis
Automated synthesis
References
External links
The Organic Synthesis Archive
Natural product syntheses
Chemistry
Reaction rate
The reaction rate or rate of reaction is the speed at which a chemical reaction takes place, defined as proportional to the increase in the concentration of a product per unit time and to the decrease in the concentration of a reactant per unit time. Reaction rates can vary dramatically. For example, the oxidative rusting of iron under Earth's atmosphere is a slow reaction that can take many years, but the combustion of cellulose in a fire is a reaction that takes place in fractions of a second. For most reactions, the rate decreases as the reaction proceeds. A reaction's rate can be determined by measuring the changes in concentration over time.
Chemical kinetics is the part of physical chemistry that concerns how rates of chemical reactions are measured and predicted, and how reaction-rate data can be used to deduce probable reaction mechanisms. The concepts of chemical kinetics are applied in many disciplines, such as chemical engineering, enzymology and environmental engineering.
Formal definition
Consider a typical balanced chemical reaction:
$$ a\,\mathrm{A} + b\,\mathrm{B} \longrightarrow p\,\mathrm{P} + q\,\mathrm{Q} $$
The lowercase letters (a, b, p, and q) represent stoichiometric coefficients, while the capital letters represent the reactants (A and B) and the products (P and Q).
According to IUPAC's Gold Book definition
the reaction rate v for a chemical reaction occurring in a closed system at constant volume, without a build-up of reaction intermediates, is defined as:
$$ v = -\frac{1}{a}\frac{d[\mathrm{A}]}{dt} = -\frac{1}{b}\frac{d[\mathrm{B}]}{dt} = \frac{1}{p}\frac{d[\mathrm{P}]}{dt} = \frac{1}{q}\frac{d[\mathrm{Q}]}{dt} $$
where [X] denotes the concentration of the substance X (X = A, B, P or Q). The reaction rate thus defined has the units of mol/L/s.
The rate of a reaction is always positive. A negative sign is present to indicate that the reactant concentration is decreasing. The IUPAC recommends that the unit of time should always be the second. The rate of reaction differs from the rate of increase of concentration of a product P by a constant factor (the reciprocal of its stoichiometric number) and for a reactant A by minus the reciprocal of the stoichiometric number. The stoichiometric numbers are included so that the defined rate is independent of which reactant or product species is chosen for measurement. For example, if a = 1 and b = 3 then B is consumed three times more rapidly than A, but v = −d[A]/dt = −(1/3) d[B]/dt is uniquely defined. An additional advantage of this definition is that for an elementary and irreversible reaction, v is equal to the product of the probability of overcoming the transition state activation energy and the number of times per second the transition state is approached by reactant molecules. When so defined, for an elementary and irreversible reaction, v is the rate of successful chemical reaction events leading to the product.
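Because the rate is usually obtained from measured concentration-time data, a minimal numerical illustration is given below: finite differences of a product concentration, divided by its stoichiometric number, estimate the rate defined above. The data points are invented for illustration only.

import numpy as np

# Invented measurements: time in s, product concentration [P] in mol/L,
# for a reaction in which P has stoichiometric number p = 2.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
P = np.array([0.000, 0.018, 0.033, 0.045, 0.055])
p = 2

rate = (1.0 / p) * np.gradient(P, t)   # v = (1/p) d[P]/dt, estimated numerically
for ti, vi in zip(t, rate):
    print(f"t = {ti:4.0f} s   v ~ {vi:.2e} mol L^-1 s^-1")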
The above definition is only valid for a single reaction, in a closed system of constant volume. If water is added to a pot containing salty water, the concentration of salt decreases, although there is no chemical reaction.
For an open system, the full mass balance must be taken into account (in − out + generation = accumulation):
$$ F_{\mathrm{A,in}} - F_{\mathrm{A,out}} + \int_0^V v\, dV = \frac{dN_\mathrm{A}}{dt} $$
where
F_A,in is the inflow rate of A in molecules per second;
F_A,out the outflow;
v is the instantaneous reaction rate of A (in number concentration rather than molar) in a given differential volume, integrated over the entire system volume V at a given moment.
When applied to the closed system at constant volume considered previously, this equation reduces to:
$$ v = \frac{d[\mathrm{A}]}{dt}, $$
where the concentration [A] is related to the number of molecules N_A by [A] = N_A/(N_0 V). Here N_0 is the Avogadro constant.
For a single reaction in a closed system of varying volume the so-called rate of conversion can be used, in order to avoid handling concentrations. It is defined as the derivative of the extent of reaction ξ with respect to time:
$$ \dot{\xi} = \frac{d\xi}{dt} = \frac{1}{\nu_i}\frac{dn_i}{dt} = \frac{1}{\nu_i}\left( V\frac{dC_i}{dt} + C_i\frac{dV}{dt} \right) $$
Here ν_i is the stoichiometric coefficient for substance i, equal to −a, −b, p, and q in the typical reaction above. Also V is the volume of reaction and C_i is the concentration of substance i.
When side products or reaction intermediates are formed, the IUPAC recommends the use of the terms rate of increase of concentration and rate of decrease of concentration for products and reactants, respectively.
Reaction rates may also be defined on a basis that is not the volume of the reactor. When a catalyst is used the reaction rate may be stated on a catalyst weight (mol g−1 s−1) or surface area (mol m−2 s−1) basis. If the basis is a specific catalyst site that may be rigorously counted by a specified method, the rate is given in units of s−1 and is called a turnover frequency.
Influencing factors
Factors that influence the reaction rate are the nature of the reaction, concentration, pressure, reaction order, temperature, solvent, electromagnetic radiation, catalyst, isotopes, surface area, stirring, and diffusion limit. Some reactions are naturally faster than others. The number of reacting species, their physical state (the particles that form solids move much more slowly than those of gases or those in solution), the complexity of the reaction and other factors can greatly influence the rate of a reaction.
Reaction rate increases with concentration, as described by the rate law and explained by collision theory. As reactant concentration increases, the frequency of collision increases. The rate of gaseous reactions increases with pressure, which is, in fact, equivalent to an increase in the concentration of the gas. The reaction rate increases in the direction where there are fewer moles of gas and decreases in the reverse direction. For condensed-phase reactions, the pressure dependence is weak.
The order of the reaction controls how the reactant concentration (or pressure) affects the reaction rate.
Usually conducting a reaction at a higher temperature delivers more energy into the system and increases the reaction rate by causing more collisions between particles, as explained by collision theory. However, the main reason that temperature increases the rate of reaction is that more of the colliding particles will have the necessary activation energy resulting in more successful collisions (when bonds are formed between reactants). The influence of temperature is described by the Arrhenius equation. For example, coal burns in a fireplace in the presence of oxygen, but it does not when it is stored at room temperature. The reaction is spontaneous at low and high temperatures but at room temperature, its rate is so slow that it is negligible. The increase in temperature, as created by a match, allows the reaction to start and then it heats itself because it is exothermic. That is valid for many other fuels, such as methane, butane, and hydrogen.
Reaction rates can be independent of temperature (non-Arrhenius) or decrease with increasing temperature (anti-Arrhenius). Reactions without an activation barrier (for example, some radical reactions), tend to have anti-Arrhenius temperature dependence: the rate constant decreases with increasing temperature.
Many reactions take place in solution and the properties of the solvent affect the reaction rate. The ionic strength also has an effect on the reaction rate.
Electromagnetic radiation is a form of energy. As such, it may speed up the rate or even make a reaction spontaneous as it provides the particles of the reactants with more energy. This energy is in one way or another stored in the reacting particles (it may break bonds, and promote molecules to electronically or vibrationally excited states...) creating intermediate species that react easily. As the intensity of light increases, the particles absorb more energy and hence the rate of reaction increases. For example, when methane reacts with chlorine in the dark, the reaction rate is slow. It can be sped up when the mixture is put under diffused light. In bright sunlight, the reaction is explosive.
The presence of a catalyst increases the reaction rate (in both the forward and reverse reactions) by providing an alternative pathway with a lower activation energy. For example, platinum catalyzes the combustion of hydrogen with oxygen at room temperature.
The kinetic isotope effect consists of a different reaction rate for the same molecule if it has different isotopes, usually hydrogen isotopes, because of the relative mass difference between hydrogen and deuterium.
In reactions on surfaces, which take place, for example, during heterogeneous catalysis, the rate of reaction increases as the surface area does. That is because more particles of the solid are exposed and can be hit by reactant molecules.
Stirring can have a strong effect on the rate of reaction for heterogeneous reactions.
Some reactions are limited by diffusion. All the factors that affect a reaction rate, except for concentration and reaction order, are taken into account in the reaction rate coefficient (the coefficient in the rate equation of the reaction).
Rate equation
For a chemical reaction a A + b B → p P + q Q, the rate equation or rate law is a mathematical expression used in chemical kinetics to link the rate of a reaction to the concentration of each reactant. For a closed system at constant volume, this is often of the form
$$ v = k[\mathrm{A}]^{n}[\mathrm{B}]^{m} - k_r[\mathrm{P}]^{n'}[\mathrm{Q}]^{m'} $$
For reactions that go to completion (which implies very small k_r), or if only the initial rate is analyzed (with initial vanishing product concentrations), this simplifies to the commonly quoted form
$$ v = k(T)[\mathrm{A}]^{n}[\mathrm{B}]^{m} $$
For gas phase reaction the rate equation is often alternatively expressed in terms of partial pressures.
In these equations k(T) is the reaction rate coefficient or rate constant, although it is not really a constant, because it includes all the parameters that affect reaction rate, except for time and concentration. Of all the parameters influencing reaction rates, temperature is normally the most important one and is accounted for by the Arrhenius equation.
The exponents n and m are called reaction orders and depend on the reaction mechanism. For an elementary (single-step) reaction, the order with respect to each reactant is equal to its stoichiometric coefficient. For complex (multistep) reactions, however, this is often not true and the rate equation is determined by the detailed mechanism, as illustrated below for the reaction of H2 and NO.
For elementary reactions or reaction steps, the order and stoichiometric coefficient are both equal to the molecularity or number of molecules participating. For a unimolecular reaction or step, the rate is proportional to the concentration of molecules of reactant, so the rate law is first order. For a bimolecular reaction or step, the number of collisions is proportional to the product of the two reactant concentrations, or second order. A termolecular step is predicted to be third order, but also very slow as simultaneous collisions of three molecules are rare.
By using the mass balance for the system in which the reaction occurs, an expression for the rate of change in concentration can be derived. For a closed system with constant volume, such an expression can look like
$$ \frac{d[\mathrm{A}]}{dt} = -a\,k(T)[\mathrm{A}]^{n}[\mathrm{B}]^{m} $$
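A hedged numerical sketch of such an expression is shown below: the coupled concentration equations for a A + b B → products, with rate v = k[A]^n[B]^m, are integrated with SciPy's solve_ivp. The rate constant, orders, coefficients, and initial concentrations are arbitrary illustrative values.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rate law: v = k*[A]**n*[B]**m for a A + b B -> products.
k, n, m = 0.5, 1, 2      # rate constant and reaction orders (arbitrary)
a, b = 1, 2              # stoichiometric coefficients (arbitrary)

def rhs(t, y):
    A, B = y
    v = k * A**n * B**m              # reaction rate
    return [-a * v, -b * v]          # d[A]/dt and d[B]/dt

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.5], dense_output=True)
for ti in (0.0, 5.0, 10.0, 20.0):
    A, B = sol.sol(ti)
    print(f"t = {ti:5.1f}  [A] = {A:.4f}  [B] = {B:.4f}")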
Example of a complex reaction: hydrogen and nitric oxide
For the reaction
$$ 2\,\mathrm{H_2} + 2\,\mathrm{NO} \longrightarrow \mathrm{N_2} + 2\,\mathrm{H_2O} $$
the observed rate equation (or rate expression) is
$$ v = k[\mathrm{H_2}][\mathrm{NO}]^2 $$
As for many reactions, the experimental rate equation does not simply reflect the stoichiometric coefficients in the overall reaction: It is third order overall: first order in H2 and second order in NO, even though the stoichiometric coefficients of both reactants are equal to 2.
In chemical kinetics, the overall reaction rate is often explained using a mechanism consisting of a number of elementary steps. Not all of these steps affect the rate of reaction; normally the slowest elementary step controls the reaction rate. For this example, a possible mechanism is
Reactions 1 and 3 are very rapid compared to the second, so the slow reaction 2 is the rate-determining step. This is a bimolecular elementary reaction whose rate is given by the second-order equation
where is the rate constant for the second step.
However N2O2 is an unstable intermediate whose concentration is determined by the fact that the first step is in equilibrium, so that where is the equilibrium constant of the first step. Substitution of this equation in the previous equation leads to a rate equation expressed in terms of the original reactants
This agrees with the form of the observed rate equation if it is assumed that . In practice the rate equation is used to suggest possible mechanisms which predict a rate equation in agreement with experiment.
The second molecule of H2 does not appear in the rate equation because it reacts in the third step, which is a rapid step after the rate-determining step, so that it does not affect the overall reaction rate.
Temperature dependence
Each reaction rate coefficient k has a temperature dependency, which is usually given by the Arrhenius equation:
$$ k = A\, e^{-E_a/(RT)} $$
where
A is the pre-exponential factor or frequency factor,
e is the exponential function,
E_a is the activation energy,
R is the gas constant.
Since at temperature T the molecules have energies given by a Boltzmann distribution, one can expect the number of collisions with energy greater than E_a to be proportional to e^{−E_a/(RT)}.
The values for A and E_a are dependent on the reaction. There are also more complex equations possible, which describe the temperature dependence of other rate constants that do not follow this pattern.
Temperature is a measure of the average kinetic energy of the reactants. As temperature increases, the kinetic energy of the reactants increases. That is, the particles move faster. With the reactants moving faster this allows more collisions to take place at a greater speed, so the chance of reactants forming into products increases, which in turn results in the rate of reaction increasing. A rise of ten degrees Celsius results in approximately twice the reaction rate.
The minimum kinetic energy required for a reaction to occur is called the activation energy and is denoted by E_a or ΔG‡. The transition state or activated complex is the energy barrier that must be overcome when changing reactants into products. The molecules with an energy greater than this barrier have enough energy to react.
For a successful collision to take place, the collision geometry must be right, meaning the reactant molecules must face the right way so the activated complex can be formed.
A chemical reaction takes place only when the reacting particles collide. However, not all collisions are effective in causing the reaction. Products are formed only when the colliding particles possess a certain minimum energy called threshold energy. As a rule of thumb, reaction rates for many reactions double for every ten degrees Celsius increase in temperature. For a given reaction, the ratio of its rate constant at a higher temperature to its rate constant at a lower temperature is known as its temperature coefficient, Q10. Q10 is commonly used as the ratio of rate constants that are ten degrees Celsius apart.
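A short sketch of the Arrhenius expression and the temperature coefficient discussed above: for an assumed activation energy, it evaluates k(T) and the ratio of rate constants ten degrees apart (Q10). The pre-exponential factor and the 50 kJ/mol activation energy are illustrative values for which the rate roughly doubles near room temperature, not data for any specific reaction.

import math

R = 8.314          # gas constant, J mol^-1 K^-1
A = 1.0e13         # pre-exponential factor, s^-1 (illustrative)
Ea = 50.0e3        # activation energy, J/mol (illustrative)

def k_arrhenius(T):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

T1, T2 = 298.15, 308.15
q10 = k_arrhenius(T2) / k_arrhenius(T1)
print(f"k({T1:.0f} K) = {k_arrhenius(T1):.3e} s^-1")
print(f"k({T2:.0f} K) = {k_arrhenius(T2):.3e} s^-1")
print(f"Q10 ~ {q10:.2f}")   # roughly 2 for this choice of Ea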
Pressure dependence
The pressure dependence of the rate constant for condensed-phase reactions (that is, when reactants and products are solids or liquid) is usually sufficiently weak in the range of pressures normally encountered in industry that it is neglected in practice.
The pressure dependence of the rate constant is associated with the activation volume. For the reaction proceeding through an activation-state complex:
$$ \mathrm{A} + \mathrm{B} \rightleftharpoons |\mathrm{A\cdots B}|^{\ddagger} \longrightarrow \mathrm{P} $$
the activation volume, ΔV‡, is:
$$ \Delta V^{\ddagger} = \bar{V}_{\ddagger} - \bar{V}_{\mathrm{A}} - \bar{V}_{\mathrm{B}} $$
where V̄ denotes the partial molar volume of a species and ‡ (a double dagger) indicates the activation-state complex.
For the above reaction, one can expect the change of the reaction rate constant (based either on mole fraction or on molar concentration) with pressure at constant temperature to be:
$$ \left(\frac{\partial \ln k}{\partial P}\right)_T = -\frac{\Delta V^{\ddagger}}{RT} $$
In practice, the matter can be complicated because the partial molar volumes and the activation volume can themselves be a function of pressure.
Reactions can increase or decrease their rates with pressure, depending on the value of ΔV‡. As an example of the possible magnitude of the pressure effect, some organic reactions were shown to double the reaction rate when the pressure was increased from atmospheric (0.1 MPa) to 50 MPa (which gives an activation volume of about −0.025 L/mol).
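The pressure effect can be illustrated by integrating the relation above at constant temperature and constant activation volume, which gives ln(k_P2/k_P1) ≈ −ΔV‡ (P2 − P1)/(RT). The sketch below evaluates this factor for an assumed activation volume and the pressure step quoted above; the chosen ΔV‡ is only an example, picked so that the rate roughly doubles over this range at room temperature.

import math

R = 8.314              # J mol^-1 K^-1
T = 298.15             # K
dV = -0.034e-3         # activation volume in m^3/mol (illustrative value)
P1, P2 = 0.1e6, 50e6   # Pa (atmospheric pressure to 50 MPa)

# Constant-T, constant-dV integration of (d ln k / dP)_T = -dV / (R*T):
factor = math.exp(-dV * (P2 - P1) / (R * T))
print(f"k(P2)/k(P1) ~ {factor:.2f}")   # about 2 for this choice of dV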
See also
Diffusion-controlled reaction
Dilution (equation)
Isothermal microcalorimetry
Rate of solution
Steady state approximation
Notes
External links
Chemical kinetics, reaction rate, and order (needs flash player)
Reaction kinetics, examples of important rate laws (lecture with audio).
Rates of reaction
Overview of Bimolecular Reactions (Reactions involving two reactants)
pressure dependence Can. J. Chem.
Chemical kinetics
Chemical reaction engineering
Temporal rates
Fourier analysis
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
Applications
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).
The transforms are usually invertible.
The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones. Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as signal filtering, polynomial multiplication, and multiplying large numbers.
The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
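The following short sketch uses NumPy's FFT to recover the component frequencies of a sampled signal, in the spirit of the musical-note example mentioned earlier; the signal, sampling rate, and frequencies are all invented for illustration.

import numpy as np

fs = 800.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
# Invented test signal: 50 Hz and 120 Hz components of different amplitudes.
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.rfft(x)              # FFT of a real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
amplitudes = 2.0 * np.abs(X) / len(x)

# Report the two strongest components.
for i in np.argsort(amplitudes)[-2:][::-1]:
    print(f"{freqs[i]:6.1f} Hz  amplitude ~ {amplitudes[i]:.2f}")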
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. Because a computer carries out these Fourier calculations rapidly, a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument in a matter of seconds.
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
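A toy version of the block-transform compression idea described above, assuming SciPy is available: an 8×8 block is transformed with a 2-D discrete cosine transform, weak coefficients are zeroed, and the block is reconstructed. Simple thresholding is used here in place of JPEG's actual quantization tables.

import numpy as np
from scipy.fft import dctn, idctn

# Smooth toy "image" block; real JPEG works on 8x8 blocks of pixel data.
block = np.add.outer(np.arange(8), np.arange(8)) * 16.0

coeffs = dctn(block, norm="ortho")            # 2-D DCT of the block
mask = np.abs(coeffs) >= 0.1 * np.abs(coeffs).max()
kept = coeffs * mask                          # discard weak components
approx = idctn(kept, norm="ortho")            # reconstruct the block

print("coefficients kept:", int(mask.sum()), "of", mask.size)
print("max absolute reconstruction error:", float(np.abs(block - approx).max()))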
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When a function s(t) is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function S(f) at frequency f represents the amplitude of a frequency component whose initial phase is given by the angle of S(f) (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
Some examples include:
Equalization of audio recordings with a series of bandpass filters;
Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
Cross correlation of similar images for co-alignment;
X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
Fourier-transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field;
Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies;
Generation of sound spectrograms used to analyze sounds;
Passive sonar used to classify targets based on machinery noise.
Variants of Fourier analysis
(Continuous) Fourier transform
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time, and the domain of the output (final) function is ordinary frequency, the transform of the function at each frequency is given by a complex number:
Evaluating this quantity at every frequency produces the frequency-domain function. The original function can then be represented as a recombination of complex exponentials of all possible frequencies:
which is the inverse transform formula. The complex value at each frequency conveys both the amplitude and the phase of that frequency component.
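The displayed formulas are not reproduced in this text; under one common convention (ordinary frequency f in hertz, signal s(t)), the transform pair usually intended here reads:

```latex
S(f) = \int_{-\infty}^{\infty} s(t)\, e^{-i 2\pi f t}\, \mathrm{d}t ,
\qquad
s(t) = \int_{-\infty}^{\infty} S(f)\, e^{\, i 2\pi f t}\, \mathrm{d}f .
```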
See Fourier transform for much more information, including:
conventions for amplitude normalization and frequency scaling/units
transform properties
tabulated transforms of specific functions
an extension/generalization for functions of multiple dimensions, such as images.
Fourier series
The Fourier transform of a periodic function with period P becomes a Dirac comb function, modulated by a sequence of complex coefficients:
(where the integral is taken over any interval of length P).
The inverse transform, known as a Fourier series, is a representation of the function in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:
Any periodic function can be expressed as a periodic summation of another function:
and the coefficients are proportional to samples of that function's Fourier transform at discrete intervals of 1/P:
Note that any function whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering that function (and therefore its transform) from just these samples (i.e. from the Fourier series) is that its non-zero portion be confined to a known interval of duration P, which is the frequency-domain dual of the Nyquist–Shannon sampling theorem.
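As above, the displayed formulas are missing here; with period P and the same frequency convention as before, the analysis and synthesis formulas usually meant are:

```latex
S[k] = \frac{1}{P} \int_{P} s(t)\, e^{-i 2\pi k t / P}\, \mathrm{d}t ,
\qquad
s(t) = \sum_{k=-\infty}^{\infty} S[k]\, e^{\, i 2\pi k t / P} .
```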
See Fourier series for more information, including the historical development.
Discrete-time Fourier transform (DTFT)
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:
which is known as the DTFT. Thus the DTFT of the sequence is also the Fourier transform of the modulated Dirac comb function.
The Fourier series coefficients (and inverse transform) are defined by:
The parameter T corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence is proportional to samples of an underlying continuous function, one can observe a periodic summation of the continuous Fourier transform. Note that any function with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover the continuous transform (and the underlying function) exactly. A sufficient condition for perfect recovery is that the non-zero portion of the transform be confined to a known frequency interval of width 1/T. When that interval is [-1/(2T), 1/(2T)], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
Another reason to be interested in the DTFT is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:
normalized frequency units
windowing (finite-length sequences)
transform properties
tabulated transforms of specific functions
Discrete Fourier transform (DFT)
Similar to a Fourier series, the DTFT of a periodic sequence with period N becomes a Dirac comb function, modulated by a sequence of complex coefficients:
(where the sum is taken over any sequence of length N).
The coefficient sequence is customarily known as the DFT of one cycle of the periodic sequence. It is also N-periodic, so it is never necessary to compute more than N coefficients. The inverse transform, also known as a discrete Fourier series, is given by:
where the sum is taken over any sequence of length N.
When the periodic sequence is expressed as a periodic summation of another function:
and
the coefficients are samples of its DTFT at discrete intervals of 1/N:
Conversely, when one wants to compute an arbitrary number (N) of discrete samples of one cycle of a continuous DTFT, it can be done by computing the relatively simple DFT defined above. In most cases, N is chosen equal to the length of the non-zero portion of the sequence. Increasing N, known as zero-padding or interpolation, results in more closely spaced samples of one cycle of the DTFT. Decreasing N causes overlap (adding) in the time domain, analogous to aliasing, which corresponds to decimation in the frequency domain. In most cases of practical interest, the sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
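A short NumPy sketch of the zero-padding behaviour described above (the sequence and the transform lengths are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])   # one cycle of data
X8 = np.fft.fft(x)             # 8-point DFT: 8 samples of the underlying DTFT

# Zero-padding adds no information; it only samples the same DTFT more densely.
X32 = np.fft.fft(x, n=32)      # 32 more closely spaced samples of one DTFT cycle

# Every 4th sample of the zero-padded transform coincides with the 8-point DFT.
print(np.allclose(X32[::4], X8))   # True
```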
See Discrete Fourier transform for much more information, including:
transform properties
applications
tabulated transforms of specific functions
Summary
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period (P or N). But these formulas do not require that condition.
Symmetry properties
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:
From this, various relationships are apparent, for example:
The transform of a real-valued function is a conjugate-symmetric function; conversely, a conjugate-symmetric transform implies a real-valued time domain (this property is checked numerically in the sketch following this list).
The transform of an imaginary-valued function is a conjugate-antisymmetric function, and the converse is true.
The transform of a conjugate-symmetric function is a real-valued function, and the converse is true.
The transform of a conjugate-antisymmetric function is an imaginary-valued function, and the converse is true.
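A quick numerical check of the first of these relationships, using NumPy on an arbitrary test sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(16)        # an arbitrary real-valued time-domain sequence
X = np.fft.fft(x)

# For real-valued input the transform is conjugate symmetric: X[N-k] == conj(X[k]).
N = len(x)
k = np.arange(1, N)
print(np.allclose(X[N - k], np.conj(X[k])))   # True
```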
History
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series.
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit,
which has been described as the first formula for the DFT,
and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.
Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:
Lagrange transformed the roots into the resolvents:
where the multiplier is a primitive cube root of unity; this is the DFT of order 3.
A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbits of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.
Time–frequency transforms
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
Fourier transforms on arbitrary locally compact abelian topological groups
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.
More specifically, Fourier analysis can be done on cosets, even discrete cosets.
See also
Conjugate Fourier series
Generalized Fourier series
Fourier–Bessel series
Fourier-related transforms
Laplace transform (LT)
Two-sided Laplace transform
Mellin transform
Non-uniform discrete Fourier transform (NDFT)
Quantum Fourier transform (QFT)
Number-theoretic transform
Basis vectors
Bispectrum
Characteristic function (probability theory)
Orthogonal functions
Schwartz space
Spectral density
Spectral density estimation
Spectral music
Walsh function
Wavelet
Notes
References
Further reading
External links
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
An Intuitive Explanation of Fourier Theory by Steven Lehar.
Lectures on Image Processing: A collection of 18 lectures in pdf format from Vanderbilt University. Lecture 6 is on the 1- and 2-D Fourier Transform. Lectures 7–15 make use of it., by Alan Peters
Introduction to Fourier analysis of time series at Medium
Integral transforms
Digital signal processing
Mathematical physics
Mathematics of computing
Time series
Joseph Fourier
Acoustics
Nernst equation
In electrochemistry, the Nernst equation is a chemical thermodynamic relationship that permits the calculation of the reduction potential of a reaction (half-cell or full-cell reaction) from the standard electrode potential, the absolute temperature, the number of electrons involved in the redox reaction, and the activities (often approximated by concentrations) of the chemical species undergoing reduction and oxidation, respectively. It was named after Walther Nernst, a German physical chemist who formulated the equation.
Expression
General form with chemical activities
When an oxidizer accepts a number z of electrons to be converted into its reduced form, the half-reaction is expressed as:
Ox + ze- -> Red
The reaction quotient, also often called the ion activity product (IAP), is the ratio between the chemical activities (a) of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx). The chemical activity of a dissolved species corresponds to its true thermodynamic concentration, taking into account the electrical interactions between all the ions present in solution at elevated concentrations. For a given dissolved species, its chemical activity (a) is the product of its activity coefficient (γ) and its molar (mol/L solution), or molal (mol/kg water), concentration (C): a = γ C. So, if the concentrations (C, also denoted here below with square brackets [ ]) of all the dissolved species of interest are sufficiently low and their activity coefficients are close to unity, their chemical activities can be approximated by their concentrations, as commonly done when simplifying, or idealizing, a reaction for didactic purposes:
At chemical equilibrium, the ratio of the activity of the reaction product (aRed) to the reagent activity (aOx) is equal to the equilibrium constant of the half-reaction:
Standard thermodynamics also says that the actual Gibbs free energy change is related to the free energy change under standard state by the relationship:
where Q is the reaction quotient and R is the universal ideal gas constant.
The cell potential associated with the electrochemical reaction is defined as the decrease in Gibbs free energy per coulomb of charge transferred, which leads to the relationship ΔG = -zFE. The constant F (the Faraday constant) is a unit conversion factor, F = NA qe, where NA is the Avogadro constant and qe is the fundamental electron charge. This immediately leads to the Nernst equation, which for an electrochemical half-cell is
For a complete electrochemical reaction (full cell), the equation can be written as
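The displayed equations are not reproduced in this text; under the usual sign convention, and with the symbols defined in the list that follows, the half-cell and full-cell forms are normally written as:

```latex
E_{\text{red}} = E^{\ominus}_{\text{red}} - \frac{RT}{zF}\,\ln\frac{a_{\text{Red}}}{a_{\text{Ox}}} ,
\qquad
E_{\text{cell}} = E^{\ominus}_{\text{cell}} - \frac{RT}{zF}\,\ln Q_r .
```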
where:
Ered is the half-cell reduction potential at the temperature of interest,
E°red is the standard half-cell reduction potential,
Ecell is the cell potential (electromotive force) at the temperature of interest,
E°cell is the standard cell potential,
R is the universal ideal gas constant: R ≈ 8.314 J/(K·mol),
T is the temperature in kelvins,
z is the number of electrons transferred in the cell reaction or half-reaction,
F is the Faraday constant, the magnitude of charge (in coulombs) per mole of electrons: F ≈ 96485 C/mol,
Qr is the reaction quotient of the cell reaction, and
a is the chemical activity of the relevant species, where aRed is the activity of the reduced form and aOx is the activity of the oxidized form.
Thermal voltage
At room temperature (25 °C), the thermal voltage is approximately 25.693 mV. The Nernst equation is frequently expressed in terms of base-10 logarithms (i.e., common logarithms) rather than natural logarithms, in which case it is written:
where λ = ln(10) ≈ 2.3026 and λVT ≈ 0.05916 Volt.
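A quick numerical check of these values (the CODATA figures for R and F used below are not given in this text):

```python
import math

R = 8.314462618        # J/(K*mol), molar gas constant (CODATA value)
F = 96485.33212        # C/mol, Faraday constant
T = 298.15             # K, i.e. 25 degrees Celsius

VT = R * T / F                          # thermal voltage
print(round(VT * 1000, 3))              # 25.693 (mV)
print(round(math.log(10) * VT, 5))      # 0.05916 (V), the base-10 "Nernst slope"
```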
Form with activity coefficients and concentrations
Similarly to equilibrium constants, activities are always measured with respect to the standard state (1 mol/L for solutes, 1 atm for gases, and T = 298.15 K, i.e., 25 °C or 77 °F). The chemical activity of a species is related to its measured concentration via its activity coefficient. Because activity coefficients tend to unity at low concentrations, or are unknown or difficult to determine at medium and high concentrations, activities in the Nernst equation are frequently replaced by simple concentrations, and formal standard reduction potentials are then used.
Taking into account the activity coefficients, the Nernst equation becomes:
where the first term, which includes the activity coefficients, is denoted E°' and called the formal standard reduction potential, so that the potential E can be directly expressed as a function of E°' and the concentrations in the simplest form of the Nernst equation:
Formal standard reduction potential
When one wishes to use simple concentrations in place of activities, but the activity coefficients are far from unity, can no longer be neglected, and are unknown or too difficult to determine, it can be convenient to introduce the notion of the "so-called" formal standard reduction potential (E°'), which is related to the standard reduction potential as follows:
The Nernst equation for the half-cell reaction can then be correctly and formally written in terms of concentrations as:
and likewise for the full cell expression.
According to Wenzel (2020), a formal reduction potential is the reduction potential that applies to a half reaction under a set of specified conditions such as, e.g., pH, ionic strength, or the concentration of complexing agents.
The formal reduction potential is often a more convenient, but conditional, form of the standard reduction potential, taking into account activity coefficients and specific conditions characteristic of the reaction medium. Its value is therefore conditional, i.e., it depends on the experimental conditions, and because the ionic strength affects the activity coefficients, it will vary from medium to medium. Several definitions of the formal reduction potential can be found in the literature, depending on the pursued objective and the experimental constraints imposed by the studied system. The general definition refers to its value determined when the concentration ratio of the oxidized and reduced species is unity. A more particular case is when it is also determined at pH 7, as e.g. for redox reactions important in biochemistry or biological systems.
Determination of the formal standard reduction potential when the concentration ratio is unity
The formal standard reduction potential can be defined as the measured reduction potential of the half-reaction at unity concentration ratio of the oxidized and reduced species under given conditions.
Indeed, when the ratio of the concentrations of the oxidized and reduced species is unity, the logarithmic concentration term vanishes and the measured potential equals the formal standard reduction potential, because the term containing the activity coefficients is already included in E°'.
The formal reduction potential makes it possible to work more simply with molar (mol/L, M) or molal (mol/kg water, m) concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective formal in the expression formal potential.
The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, i.e. from reduction to oxidation or vice versa, the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions (i.e. the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, Pgas = 1 bar) it becomes de facto a standard potential. According to Brown and Swift (1949):
"A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal".
In this case, as for the standard reduction potentials, the concentrations of dissolved species remain equal to one molar (M) or one molal (m), and so are said to be one formal (F). So, expressing the concentration in molarity (1 mol/L):
The term formal concentration (F) is now largely ignored in the current literature and can be commonly assimilated to molar concentration (M), or molality (m) in case of thermodynamic calculations.
The formal potential is also found halfway between the two peaks in a cyclic voltammogram, where at this point the concentration of Ox (the oxidized species) and Red (the reduced species) at the electrode surface are equal.
The activity coefficients are included in the formal potential, and because they depend on experimental conditions such as temperature, ionic strength, and pH, the formal potential cannot be regarded as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions.
Formal reduction potentials are applied to simplify the calculations of a considered system under given conditions and the interpretation of measurements. The experimental conditions in which they are determined and their relationship to the standard reduction potentials must be clearly described to avoid confusing them with standard reduction potentials.
Formal standard reduction potential at pH 7
Formal standard reduction potentials are also commonly used in biochemistry and cell biology to refer to standard reduction potentials measured at pH 7, a value closer to the pH of most physiological and intracellular fluids than the standard-state pH of 0. The advantage is to define a more appropriate redox scale, corresponding better to real conditions than the standard state. Formal standard reduction potentials make it easier to estimate whether a redox reaction, supposed to occur in a metabolic process or to fuel microbial activity under some conditions, is feasible or not.
While standard reduction potentials always refer to the standard hydrogen electrode (SHE), with [H+] = 1 M corresponding to pH 0 and fixed arbitrarily to zero by convention, this is no longer the case at a pH of 7. The reduction potential of a hydrogen electrode operating at pH 7 is then -0.413 V with respect to the standard hydrogen electrode (SHE).
Expression of the Nernst equation as a function of pH
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. Eh explicitly denotes the redox potential expressed versus the standard hydrogen electrode (SHE). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):
The half-cell standard reduction potential is given by
where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. The Nernst equation relates pH and Eh as follows:
where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Eh as a function of pH, with a slope of -0.05916 (h/z) volt, where h and z are respectively the numbers of protons and electrons involved in the half-reaction (pH has no units).
This equation predicts a lower Eh at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. Eh is then often noted Eh(SHE) to indicate that it refers to the standard hydrogen electrode (SHE), whose potential is 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0).
Main factors affecting the formal standard reduction potentials
The main factor affecting the formal reduction potentials in biochemical or biological processes is most often the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied taking care to first express the relationship as a function of pH. The second factor to be considered are the values of the concentrations taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentrations values and the hypotheses made on the activity coefficients must always be explicitly indicated. When using, or comparing, several formal reduction potentials they must also be internally consistent.
Problems may occur when mixing different sources of data using different conventions or approximations (i.e., with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry when microbial activity could also be at work in the system), care must be taken not to inadvertently directly mix standard reduction potentials versus SHE (pH = 0) with formal reduction potentials (pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and mixing data from classical electrochemistry and microbiology textbooks without paying attention to the different conventions on which they are based).
Examples with a Pourbaix diagram
To illustrate the dependency of the reduction potential on pH, one can simply consider the two oxido-reduction equilibria determining the water stability domain in a Pourbaix diagram. When water is submitted to electrolysis by applying a sufficient difference of electrical potential between two electrodes immersed in water, hydrogen is produced at the cathode (reduction of water protons) while oxygen is formed at the anode (oxidation of water oxygen atoms). The same may occur if a reductant stronger than hydrogen (e.g., metallic Na) or an oxidant stronger than oxygen (e.g., F2) enters into contact with water and reacts with it. In the simplest possible version of a Pourbaix diagram, the water stability domain (grey surface) is delimited in terms of redox potential by two inclined red dashed lines:
Lower stability line with hydrogen gas evolution due to the proton reduction at very low Eh:
(cathode: reduction)
Higher stability line with oxygen gas evolution due to water oxygen oxidation at very high Eh:
(anode: oxidation)
When solving the Nernst equation for each corresponding reduction reaction (one needs to revert the water oxidation reaction producing oxygen), both equations have a similar form because the number of protons and the number of electrons involved within each reaction are the same and their ratio is one (2/2 for H2 and 4/4 for O2, respectively), so the Nernst equation simplifies when expressed as a function of pH.
The result can be numerically expressed as follows:
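The numerical expressions themselves are missing from this text; using the standard potentials and the 0.05916 V per pH unit slope discussed in the surrounding paragraphs, the two boundary lines are conventionally written as:

```latex
E_h(\mathrm{H^+/H_2}) = 0 - 0.05916\,\mathrm{pH} ,
\qquad
E_h(\mathrm{O_2/H_2O}) = 1.229 - 0.05916\,\mathrm{pH} \quad (\text{volts vs. SHE}).
```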
Note that the slopes of the upper and lower lines bounding the water stability domain are the same (-59.16 mV per pH unit), so they are parallel on a Pourbaix diagram. As the slopes are negative, at high pH both hydrogen and oxygen evolution require a much lower reduction potential than at low pH.
For the reduction of H+ into H2, the above-mentioned relationship becomes:
because by convention E° = 0 V for the standard hydrogen electrode (SHE: [H+] = 1 M, i.e. pH = 0). So, at pH = 7, Eh = -0.414 V for the reduction of protons.
For the reduction of O2 into 2 H2O, the above-mentioned relationship becomes:
because E° = +1.229 V with respect to the standard hydrogen electrode (SHE: [H+] = 1 M, i.e. pH = 0). So, at pH = 7, Eh = +0.815 V for the reduction of oxygen.
The offset of -414 mV in Eh is the same for both reduction reactions because they share the same linear relationship as a function of pH and the slopes of their lines are the same. This can be directly verified on a Pourbaix diagram. For other reduction reactions, the value of the formal reduction potential at pH 7, commonly referred to for biochemical reactions, also depends on the slope of the corresponding line in a Pourbaix diagram, i.e. on the ratio of the number of protons (H+) to the number of electrons (e−) involved in the reduction reaction, and thus on the stoichiometry of the half-reaction. The determination of the formal reduction potential at pH = 7 for a given biochemical half-reaction thus requires calculating it with the corresponding Nernst equation expressed as a function of pH. One cannot simply apply an offset of -414 mV to the Eh value (SHE) when the H+/e− ratio differs from 1.
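A short numerical check of the two boundary lines and of the pH 7 values quoted above, in Python (the slope and the +1.229 V standard potential are taken from the surrounding text):

```python
slope = 0.05916            # V per pH unit at 25 degrees Celsius (base-10 Nernst slope)

def eh_hydrogen(pH):       # 2 H+ + 2 e- -> H2,   E0 = 0 V (SHE)
    return 0.0 - slope * pH

def eh_oxygen(pH):         # O2 + 4 H+ + 4 e- -> 2 H2O,   E0 = +1.229 V
    return 1.229 - slope * pH

print(round(eh_hydrogen(7), 3))   # -0.414 V
print(round(eh_oxygen(7), 3))     #  0.815 V
```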
Applications in biology
Besides important redox reactions in biochemistry and microbiology, the Nernst equation is also used in physiology for calculating the electric potential of a cell membrane with respect to one type of ion. It can be linked to the acid dissociation constant.
Nernst potential
The Nernst equation has a physiological application when used to calculate the potential of an ion of charge z across a membrane. This potential is determined using the concentration of the ion both inside and outside the cell:
When the membrane is in thermodynamic equilibrium (i.e., no net flux of ions), and if the cell is permeable to only one ion, then the membrane potential must be equal to the Nernst potential for that ion.
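As a numerical illustration, the sketch below evaluates the Nernst potential for potassium; the concentrations are typical textbook values assumed for the example, not data from this article:

```python
import math

R, F = 8.314462618, 96485.33212
T = 310.15                       # K, body temperature (37 degrees Celsius)
z = 1                            # charge of K+

K_out, K_in = 5.0, 140.0         # mM; typical textbook values, assumed here
E_K = (R * T) / (z * F) * math.log(K_out / K_in)
print(round(E_K * 1000, 1))      # about -89 mV
```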
Goldman equation
When the membrane is permeable to more than one ion, as is inevitably the case, the resting potential can be determined from the Goldman equation, which is a solution of the Goldman–Hodgkin–Katz (GHK) flux equation under the constraint that the total current density driven by the electrochemical force is zero:
where
Vm is the membrane potential (in volts, equivalent to joules per coulomb),
Pion is the permeability for that ion (in meters per second),
[ion]out is the extracellular concentration of that ion (in moles per cubic meter, to match the other SI units, though the units strictly don't matter, as the ion concentration terms become a dimensionless ratio),
[ion]in is the intracellular concentration of that ion (in moles per cubic meter),
R is the ideal gas constant (joules per kelvin per mole),
T is the temperature in kelvins,
F is the Faraday constant (coulombs per mole).
The potential across the cell membrane that exactly opposes net diffusion of a particular ion through the membrane is called the Nernst potential for that ion. As seen above, the magnitude of the Nernst potential is determined by the ratio of the concentrations of that specific ion on the two sides of the membrane. The greater this ratio the greater the tendency for the ion to diffuse in one direction, and therefore the greater the Nernst potential required to prevent the diffusion. A similar expression exists that includes (the absolute value of the transport ratio). This takes transporters with unequal exchanges into account. See: sodium-potassium pump where the transport ratio would be 2/3, so r equals 1.5 in the formula below. The reason why we insert a factor r = 1.5 here is that current density by electrochemical force Je.c.(Na+) + Je.c.(K+) is no longer zero, but rather Je.c.(Na+) + 1.5Je.c.(K+) = 0 (as for both ions flux by electrochemical force is compensated by that by the pump, i.e. Je.c. = −Jpump), altering the constraints for applying GHK equation. The other variables are the same as above. The following example includes two ions: potassium (K+) and sodium (Na+). Chloride is assumed to be in equilibrium.
When chloride (Cl−) is taken into account, it enters the formula with the intracellular and extracellular concentrations interchanged, because it carries a negative charge.
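The displayed Goldman formulas are not reproduced in this text; the sketch below evaluates the GHK voltage equation with chloride included, using illustrative permeabilities and concentrations that are assumptions for the example:

```python
import math

R, F, T = 8.314462618, 96485.33212, 298.15

# Relative permeabilities and concentrations (mM): illustrative numbers only,
# loosely based on classic squid-axon figures, not data from this article.
P   = {"K": 1.0,   "Na": 0.04,  "Cl": 0.45}
out = {"K": 20.0,  "Na": 440.0, "Cl": 560.0}
ins = {"K": 400.0, "Na": 50.0,  "Cl": 52.0}

# Goldman-Hodgkin-Katz voltage equation; for the anion Cl- the inside and
# outside concentrations swap places in the ratio.
num = P["K"] * out["K"] + P["Na"] * out["Na"] + P["Cl"] * ins["Cl"]
den = P["K"] * ins["K"] + P["Na"] * ins["Na"] + P["Cl"] * out["Cl"]
Vm = (R * T / F) * math.log(num / den)
print(round(Vm * 1000, 1))       # roughly -61 mV, a plausible resting potential
```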
Derivation
Using Boltzmann factor
For simplicity, we will consider a solution of redox-active molecules that undergo a one-electron reversible reaction
and that have a standard potential of zero, and in which the activities are well represented by the concentrations (i.e. unit activity coefficient). The chemical potential of this solution is the difference between the energy barriers for taking electrons from and for giving electrons to the working electrode that is setting the solution's electrochemical potential. The ratio of oxidized to reduced molecules, [Ox]/[Red], is equivalent to the probability of being oxidized (giving electrons) over the probability of being reduced (taking electrons), which we can write in terms of the Boltzmann factor for these processes:
Taking the natural logarithm of both sides gives
If the potential is not zero when the ratio is unity, we need to add in this additional constant:
Dividing the equation by e to convert from chemical potentials to electrode potentials, and remembering that k/e = R/F, we obtain the Nernst equation for the one-electron process:
Using thermodynamics (chemical potential)
Quantities here are given per molecule, not per mole, and so the Boltzmann constant k and the electron charge e are used instead of the gas constant R and the Faraday constant F. To convert to the molar quantities given in most chemistry textbooks, it is simply necessary to multiply by the Avogadro constant: R = kNA and F = eNA. The entropy of a molecule is defined as
where Ω is the number of states available to the molecule. The number of states must vary linearly with the volume of the system (here an idealized system is considered for better understanding, so that activities are posited very close to the true concentrations. Fundamental statistical proof of the mentioned linearity goes beyond the scope of this section, but to see that this is true it is simpler to consider the usual isothermal process for an ideal gas where the change of entropy takes place. It follows from the definition of entropy and from the condition of constant temperature and quantity of gas that the change in the number of states must be proportional to the relative change in volume. In this sense there is no difference in the statistical properties of ideal gas atoms compared with the dissolved species of a solution with activity coefficients equaling one: particles freely "hang around" filling the provided volume), which is inversely proportional to the concentration, so we can also write the entropy as
The change in entropy from some state 1 to another state 2 is therefore
so that the entropy of state 2 is
If state 1 is at standard conditions, in which the concentration is unity (e.g., 1 atm or 1 M), it will merely cancel the units of the concentration. We can, therefore, write the entropy of an arbitrary molecule A as
where is the entropy at standard conditions and [A] denotes the concentration of A. The change in entropy for a reaction
is then given by
We define the ratio in the last term as the reaction quotient:
where the numerator is a product of reaction product activities, each raised to the power of its stoichiometric coefficient, and the denominator is a similar product of reactant activities. All activities refer to a time t. Under certain circumstances (see chemical equilibrium) each activity term such as aA may be replaced by a concentration term, [A]. In an electrochemical cell, the cell potential E is the chemical potential available from redox reactions. E is related to the Gibbs free energy change ΔG only by a constant:
ΔG = -zFE, where z is the number of electrons transferred and F is the Faraday constant. There is a negative sign because a spontaneous reaction has a negative Gibbs free energy change ΔG and a positive potential E. The Gibbs free energy is related to the entropy by G = H - TS, where H is the enthalpy and T is the temperature of the system. Using these relations, we can now write the change in Gibbs free energy,
and the cell potential,
This is the more general form of the Nernst equation.
For the redox reaction ,
and we have:
The cell potential at standard temperature and pressure (STP) is often replaced by the formal potential , which includes the activity coefficients of the dissolved species under given experimental conditions (T, P, ionic strength, pH, and complexing agents) and is the potential that is actually measured in an electrochemical cell.
Relation to the chemical equilibrium
The standard Gibbs free energy is related to the equilibrium constant as follows:
At the same time, is also equal to the product of the total charge transferred during the reaction and the cell potential:
The sign is negative, because the considered system performs the work and thus releases energy.
So,
And therefore:
Starting from the Nernst equation, one can also demonstrate the same relationship in the reverse way.
At chemical equilibrium, or thermodynamic equilibrium, the electrochemical potential (E) is zero, and therefore the reaction quotient attains the special value known as the equilibrium constant:
Therefore,
Or at standard state,
We have thus related the standard electrode potential and the equilibrium constant of a redox reaction.
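As a small numerical illustration of that relation (the cell potential and electron count below are made-up values):

```python
import math

R, F, T = 8.314462618, 96485.33212, 298.15

def equilibrium_constant(E0_cell, z):
    """Equilibrium constant from the standard cell potential: ln K = z*F*E0/(R*T)."""
    return math.exp(z * F * E0_cell / (R * T))

# Hypothetical two-electron cell with E0 = +0.20 V (illustrative numbers only).
print(f"K = {equilibrium_constant(0.20, 2):.2e}")   # about 6e6
```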
Limitations
In dilute solutions, the Nernst equation can be expressed directly in terms of concentrations (since activity coefficients are close to unity). But at higher concentrations, the true activities of the ions must be used. This complicates the use of the Nernst equation, since estimation of non-ideal activities of ions generally requires experimental measurements. The Nernst equation also only applies when there is no net current flow through the electrode. The activity of ions at the electrode surface changes when there is current flow, and there are additional overpotential and resistive loss terms which contribute to the measured potential.
At very low concentrations of the potential-determining ions, the potential predicted by the Nernst equation diverges toward ±∞. This is physically meaningless because, under such conditions, the exchange current density becomes very low, and there may be no thermodynamic equilibrium necessary for the Nernst equation to hold. The electrode is called unpoised in such a case. Other effects tend to take control of the electrochemical behavior of the system, like the involvement of the solvated electron in electricity transfer and electrode equilibria, as analyzed by Alexander Frumkin and B. Damaskin, Sergio Trasatti, etc.
Time dependence of the potential
The expression of time dependence has been established by Karaoglanoff.
Significance in other scientific fields
The Nernst equation has been involved in the scientific controversy about cold fusion. Fleischmann and Pons, claiming that cold fusion could exist, calculated that a palladium cathode immersed in a heavy water electrolysis cell could achieve up to 10^27 atmospheres of pressure inside the crystal lattice of the metal of the cathode, enough pressure to cause spontaneous nuclear fusion. In reality, only 10,000–20,000 atmospheres were achieved. The American physicist John R. Huizenga claimed their original calculation was affected by a misinterpretation of the Nernst equation. He cited a paper about Pd–Zr alloys.
The Nernst equation allows the calculation of the extent of reaction between two redox systems and can be used, for example, to assess whether a particular reaction will go to completion or not. At chemical equilibrium, the electromotive forces (emf) of the two half cells are equal. This allows the equilibrium constant of the reaction to be calculated and hence the extent of the reaction.
See also
Concentration cell
Dependency of reduction potential on pH
Electrode potential
Galvanic cell
Goldman equation
Membrane potential
Nernst–Planck equation
Pourbaix diagram
Reduction potential
Solvated electron
Standard electrode potential
Standard electrode potential (data page)
Standard apparent reduction potentials in biochemistry at pH 7 (data page)
References
External links
Nernst/Goldman Equation Simulator
Nernst Equation Calculator
Interactive Nernst/Goldman Java Applet
DoITPoMS Teaching and Learning Package- "The Nernst Equation and Pourbaix Diagrams"
Walther Nernst
Electrochemical equations
Eponymous equations of physics
Metamodeling
A metamodel is a model of a model, and metamodeling is the process of generating such metamodels. Thus metamodeling or meta-modeling is the analysis, construction, and development of the frames, rules, constraints, models, and theories applicable and useful for modeling a predefined class of problems. As its name implies, this concept applies the notions of meta- and modeling in software engineering and systems engineering. Metamodels are of many types and have diverse applications.
Overview
A metamodel or surrogate model is a model of a model, i.e. a simplified model of an actual model of a circuit, system, or software-like entity. A metamodel can be a mathematical relation or algorithm representing input and output relations. A model is an abstraction of phenomena in the real world; a metamodel is yet another abstraction, highlighting the properties of the model itself. A model conforms to its metamodel in the way that a computer program conforms to the grammar of the programming language in which it is written. Various types of metamodels include polynomial equations, neural networks, Kriging, etc. "Metamodeling" is the construction of a collection of "concepts" (things, terms, etc.) within a certain domain. Metamodeling typically involves studying the output and input relationships and then fitting the right metamodels to represent that behavior.
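As a rough illustration of what fitting a metamodel to input/output samples can look like, the sketch below builds a polynomial surrogate of a more expensive function; the function, sample size and polynomial degree are arbitrary choices for the example, not a prescribed method:

```python
import numpy as np

# Stand-in for an expensive simulation that we can only afford to sample sparsely.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x ** 2

x_train = np.linspace(-2, 2, 9)               # a handful of (costly) evaluations
y_train = expensive_model(x_train)

# Fit a cheap polynomial metamodel (surrogate) to those input/output samples.
coeffs = np.polyfit(x_train, y_train, deg=6)
surrogate = np.poly1d(coeffs)

# The surrogate can now be evaluated thousands of times at negligible cost.
x_test = np.linspace(-2, 2, 201)
error = np.max(np.abs(surrogate(x_test) - expensive_model(x_test)))
print(f"max surrogate error on [-2, 2]: {error:.3f}")
```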
Common uses for metamodels are:
As a schema for semantic data that needs to be exchanged or stored
As a language that supports a particular method or process
As a language to express additional semantics of existing information
As a mechanism to create tools that work with a broad class of models at run time
As a schema for modeling and automatically exploring sentences of a language with applications to automated test synthesis
As an approximation of a higher-fidelity model for use when reducing time, cost, or computational effort is necessary
Because of the "meta" character of metamodeling, both the praxis and theory of metamodels are of relevance to metascience, metaphilosophy, metatheories and systemics, and meta-consciousness. The concept can be useful in mathematics, and has practical applications in computer science and computer engineering/software engineering. The latter are the main focus of this article.
Topics
Definition
In software engineering, the use of models is an alternative to more common code-based development techniques. A model always conforms to a unique metamodel. One of the currently most active branches of Model Driven Engineering is the approach named model-driven architecture proposed by OMG. This approach is embodied in the Meta Object Facility (MOF) specification.
Typical metamodelling specifications proposed by OMG are UML, SysML, SPEM or CWM. ISO has also published the standard metamodel ISO/IEC 24744. All the languages presented below could be defined as MOF metamodels.
Metadata modeling
Metadata modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined class of problems. (see also: data modeling).
Model transformations
One important move in model-driven engineering is the systematic use of model transformation languages. The OMG has proposed a standard for this called QVT for Queries/Views/Transformations. QVT is based on the meta-object facility (MOF). Among many other model transformation languages (MTLs), some examples of implementations of this standard are AndroMDA, VIATRA, Tefkat, MT, ManyDesigns Portofino.
Relationship to ontologies
Meta-models are closely related to ontologies. Both are often used to describe and analyze the relations between concepts:
Ontologies: express something meaningful within a specified universe or domain of discourse by utilizing grammar for using vocabulary. The grammar specifies what it means to be a well-formed statement, assertion, query, etc. (formal constraints) on how terms in the ontology’s controlled vocabulary can be used together.
Meta-modeling: can be considered as an explicit description (constructs and rules) of how a domain-specific model is built. In particular, this comprises a formalized specification of the domain-specific notations. Typically, metamodels follow – and always should follow – a strict rule set. "A valid metamodel is an ontology, but not all ontologies are modeled explicitly as metamodels."
Types of metamodels
For software engineering, several types of models (and their corresponding modeling activities) can be distinguished:
Metadata modeling (MetaData model)
Meta-process modeling (MetaProcess model)
Executable meta-modeling (combining both of the above and much more, as in the general purpose tool Kermeta)
Model transformation language (see below)
Polynomial metamodels
Neural network metamodels
Kriging metamodels
Piecewise polynomial (spline) metamodels
Gradient-enhanced kriging (GEK)
Zoos of metamodels
A library of similar metamodels has been called a Zoo of metamodels.
There are several types of meta-model zoos. Some are expressed in ECore. Others are written in MOF 1.4 – XMI 1.2. The metamodels expressed in UML-XMI1.2 may be uploaded in Poseidon for UML, a UML CASE tool.
See also
Business reference model
Data governance
Model-driven engineering (MDE)
Model-driven architecture (MDA)
Domain-specific language (DSL)
Domain-specific modeling (DSM)
Generic Eclipse Modeling System (GEMS)
Kermeta (Kernel Meta-modeling)
Metadata
MetaCASE tool (tools for creating tools for computer-aided software engineering tools)
Method engineering
MODAF Meta-Model
MOF Queries/Views/Transformations (MOF QVT)
Object Process Methodology
Requirements analysis
Space mapping
Surrogate model
Transformation language
VIATRA (Viatra)
XML transformation language (XML TL)
References
Further reading
Booch, G., Rumbaugh, J., Jacobson, I. (1999), The Unified Modeling Language User Guide, Redwood City, CA: Addison Wesley Longman Publishing Co., Inc.
J. P. van Gigch, System Design Modeling and Metamodeling, Plenum Press, New York, 1991
Gopi Bulusu, hamara.in, 2004 Model Driven Transformation
P. C. Smolik, Mambo Metamodeling Environment, Doctoral Thesis, Brno University of Technology. 2006
Gonzalez-Perez, C. and B. Henderson-Sellers, 2008. Metamodelling for Software Engineering. Chichester (UK): Wiley. 210 p.
M.A. Jeusfeld, M. Jarke, and J. Mylopoulos, 2009. Metamodeling for Method Engineering. Cambridge (USA): The MIT Press. 424 p. , Open access via http://conceptbase.sourceforge.net/2021_Metamodeling_for_Method_Engineering.pdf
G. Caplat Modèles & Métamodèles, 2008 -
Fill, H.-G., Karagiannis, D., 2013. On the Conceptualisation of Modelling Methods Using the ADOxx Meta Modelling Platform, Enterprise Modelling and Information Systems Architectures, Vol. 8, Issue 1, 4-25.
Software design
Scientific modelling
Syntrophy
In biology, syntrophy, syntrophism, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the cooperative interaction between at least two microbial species to degrade a single substrate. This type of biological interaction typically involves the transfer of one or more metabolic intermediates between two or more metabolically diverse microbial species living in close proximity to each other. Thus, syntrophy can be considered an obligatory interdependency and a mutualistic metabolism between different microbial species, wherein the growth of one partner depends on the nutrients, growth factors, or substrates provided by the other(s).
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is primarily based on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tract of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation of these organic compounds cannot occur in fermenting microorganisms unless the hydrogen concentration is reduced to a low level by the methanogens. The key mechanism that ensures the success of syntrophy is interspecies electron transfer. Interspecies electron transfer can be carried out in three ways: interspecies hydrogen transfer, interspecies formate transfer and direct interspecies electron transfer. Reverse electron transport is prominent in syntrophic metabolism.
The metabolic reactions and the energy involved for syntrophic degradation with H2 consumption:
A classical syntrophic relationship can be illustrated by the activity of ‘Methanobacillus omelianskii’. It was isolated several times from anaerobic sediments and sewage sludge and was regarded as a pure culture of an anaerobe converting ethanol to acetate and methane. In fact, however, the culture turned out to consist of a methanogenic archaeon ("organism M.o.H.") and a Gram-negative bacterium ("organism S"), which together oxidize ethanol to acetate and methane via interspecies hydrogen transfer. Individuals of organism S are obligate anaerobic bacteria that use ethanol as an electron donor, whereas M.o.H. is a methanogen that oxidizes hydrogen gas to produce methane.
Organism S: 2 Ethanol + 2 H2O → 2 Acetate− + 2 H+ + 4 H2 (ΔG°' = +9.6 kJ per reaction)
Strain M.o.H.: 4 H2 + CO2 → Methane + 2 H2O (ΔG°' = -131 kJ per reaction)
Co-culture:2 Ethanol + CO2 → 2 Acetate− + 2 H+ + Methane (ΔG°' = -113 kJ per reaction)
The oxidation of ethanol by organism S is made possible by the methanogen M.o.H., which consumes the hydrogen produced by organism S, turning a positive Gibbs free energy change into a negative one. This situation favors growth of organism S and also provides energy for the methanogens by consuming hydrogen. Further down the line, acetate accumulation is also prevented by a similar syntrophic relationship. Syntrophic degradation of substrates like butyrate and benzoate can also happen without hydrogen consumption.
An example of propionate and butyrate degradation with interspecies formate transfer carried out by the mutual system of Syntrophomonas wolfei and Methanobacterium formicicum:
Propionate+2H2O+2CO2 → Acetate- +3Formate- +3H+ (ΔG°'=+65.3 kJ/mol)
Butyrate+2H2O+2CO2 → 2Acetate- +3Formate- +3H+ (ΔG°'=+38.5 kJ/mol)
Direct interspecies electron transfer (DIET), which involves electron transfer without any electron carrier such as H2 or formate, was reported in the co-culture system of Geobacter metallireducens and Methanosaeta or Methanosarcina.
Examples
In ruminants
The defining feature of ruminants, such as cows and goats, is a stomach called a rumen. The rumen contains billions of microbes, many of which are syntrophic. Some anaerobic fermenting microbes in the rumen (and other gastrointestinal tracts) are capable of degrading organic matter to short chain fatty acids, and hydrogen. The accumulating hydrogen inhibits the microbe's ability to continue degrading organic matter, but the presence of syntrophic hydrogen-consuming microbes allows continued growth by metabolizing the waste products. In addition, fermentative bacteria gain maximum energy yield when protons are used as electron acceptor with concurrent H2 production. Hydrogen-consuming organisms include methanogens, sulfate-reducers, acetogens, and others.
Some fermentation products, such as fatty acids longer than two carbon atoms, alcohols longer than one carbon atom, and branched-chain and aromatic fatty acids, cannot directly be used in methanogenesis. In acetogenesis processes, these products are oxidized to acetate and H2 by obligate proton-reducing bacteria in syntrophic relationship with methanogenic archaea, as a low H2 partial pressure is essential for acetogenic reactions to be thermodynamically favorable (ΔG < 0).
Biodegradation of pollutants
Syntrophic microbial food webs play an integral role in bioremediation, especially in environments contaminated with crude oil and petrol. Environmental contamination with oil is of high ecological importance and can be effectively remediated through syntrophic degradation by complete mineralization of alkane, aliphatic and hydrocarbon chains. The hydrocarbons of the oil are broken down after activation by fumarate, a chemical compound that is regenerated by other microorganisms. Without regeneration, the microbes degrading the oil would eventually run out of fumarate and the process would cease. This breakdown is crucial in the processes of bioremediation and global carbon cycling.
Syntrophic microbial communities are key players in the breakdown of aromatic compounds, which are common pollutants. The degradation of aromatic benzoate to methane produces intermediate compounds such as formate, acetate, and H2. The buildup of these products makes benzoate degradation thermodynamically unfavorable. These intermediates can be metabolized syntrophically by methanogens, which makes the degradation process thermodynamically favorable.
Degradation of amino acids
Studies have shown that bacterial degradation of amino acids can be significantly enhanced through the process of syntrophy. Microbes growing poorly on the amino acid substrates alanine, aspartate, serine, leucine, valine, and glycine can have their rate of growth dramatically increased by syntrophic H2 scavengers. These scavengers, like Methanospirillum and Acetobacterium, metabolize the H2 waste produced during amino acid breakdown, preventing a toxic build-up. Another way to improve amino acid breakdown is through interspecies electron transfer mediated by formate. Species like Desulfovibrio employ this method. Amino acid-fermenting anaerobes such as Clostridium species, Peptostreptococcus asaccharolyticus and Acidaminococcus fermentans are known to break down amino acids like glutamate with the help of hydrogen-scavenging methanogenic partners, without going through the usual Stickland fermentation pathway.
Anaerobic digestion
Effective syntrophic cooperation between propionate-oxidizing bacteria, acetate-oxidizing bacteria and H2/acetate-consuming methanogens is necessary to successfully carry out anaerobic digestion to produce biomethane.
Examples of syntrophic organisms
Syntrophomonas wolfei
Syntrophobacter fumaroxidans
Pelotomaculum thermopropionicum
Syntrophus aciditrophicus
Syntrophus buswellii
Syntrophus gentianae
References
Biological interactions
Food chains
Structural functionalism
Structural functionalism, or simply functionalism, is "a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stability".
This approach looks at society through a macro-level orientation, which is a broad focus on the social structures that shape society as a whole, and believes that society has evolved like organisms. This approach looks at both social structure and social functions. Functionalism addresses society as a whole in terms of the function of its constituent elements; namely norms, customs, traditions, and institutions.
A common analogy called the organic or biological analogy, popularized by Herbert Spencer, presents these parts of society as human body "organs" that work toward the proper functioning of the "body" as a whole. In the most basic terms, it simply emphasizes "the effort to impute, as rigorously as possible, to each feature, custom, or practice, its effect on the functioning of a supposedly stable, cohesive system". For Talcott Parsons, "structural-functionalism" came to describe a particular stage in the methodological development of social science, rather than a specific school of thought.
Theory
In sociology, classical theories are defined by a tendency towards biological analogy and notions of social evolutionism:
While one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as Rousseau, sociology draws firmer attention to those institutions unique to industrialized capitalist society (or modernity).
Auguste Comte believed that society constitutes a separate "level" of reality, distinct from both biological and inorganic matter. Explanations of social phenomena had therefore to be constructed within this level, individuals being merely transient occupants of comparatively stable social roles. In this view, Comte was followed by Émile Durkheim.
A central concern for Durkheim was the question of how certain societies maintain internal stability and survive over time. He proposed that such societies tend to be segmented, with equivalent parts held together by shared values, common symbols or (as his nephew Marcel Mauss held), systems of exchanges. Durkheim used the term "mechanical solidarity" to refer to these types of "social bonds, based on common sentiments and shared moral values, that are strong among members of pre-industrial societies". In modern, complex societies, members perform very different tasks, resulting in a strong interdependence. Based on the metaphor above of an organism in which many parts function together to sustain the whole, Durkheim argued that complex societies are held together by "organic solidarity", i.e. "social bonds, based on specialization and interdependence, that are strong among members of industrial societies".
The central concern of structural functionalism may be regarded as a continuation of the Durkheimian task of explaining the apparent stability and internal cohesion needed by societies to endure over time. Societies are seen as coherent, bounded and fundamentally relational constructs that function like organisms, with their various parts (or social institutions) working together in an unconscious, quasi-automatic fashion toward achieving an overall social equilibrium. All social and cultural phenomena are therefore seen as functional in the sense of working together, and are effectively deemed to have "lives" of their own. They are primarily analyzed in terms of this function. The individual is significant not in and of themselves, but rather in terms of their status, their position in patterns of social relations, and the behaviours associated with their status. Therefore, the social structure is the network of statuses connected by associated roles.
Functionalism also has an anthropological basis in the work of theorists such as Marcel Mauss, Bronisław Malinowski and Radcliffe-Brown. The prefix 'structural' emerged in Radcliffe-Brown's specific usage. Radcliffe-Brown proposed that most stateless, "primitive" societies, lacking strong centralized institutions, are based on an association of corporate-descent groups, i.e. the respective society's recognised kinship groups. Structural functionalism also took on Malinowski's argument that the basic building block of society is the nuclear family, and that the clan is an outgrowth, not vice versa.
It is simplistic to equate the perspective directly with political conservatism. The tendency to emphasize "cohesive systems", however, leads functionalist theories to be contrasted with "conflict theories" which instead emphasize social problems and inequalities.
Prominent theorists
Auguste Comte
Auguste Comte, the "Father of Positivism", pointed out the need to keep society unified as many traditions were diminishing. He was the first person to coin the term sociology. Comte suggests that sociology is the product of a three-stage development:
Theological stage: From the beginning of human history until the end of the European Middle Ages, people took a religious view that society expressed God's will. In the theological state, the human mind, seeking the essential nature of beings, the first and final causes (the origin and purpose) of all effects—in short, absolute knowledge—supposes all phenomena to be produced by the immediate action of supernatural beings.
Metaphysical stage: People began seeing society as a natural system as opposed to the supernatural. This began with enlightenment and the ideas of Hobbes, Locke, and Rousseau. Perceptions of society reflected the failings of a selfish human nature rather than the perfection of God.
Positive or scientific stage: Describing society through the application of the scientific approach, which draws on the work of scientists.
Herbert Spencer
Herbert Spencer (1820–1903) was a British philosopher famous for applying the theory of natural selection to society. He was in many ways the first true sociological functionalist. In fact, while Durkheim is widely considered the most important functionalist among positivist theorists, it is known that much of his analysis was culled from reading Spencer's work, especially his Principles of Sociology (1874–96). In describing society, Spencer alludes to the analogy of a human body. Just as the structural parts of the human body—the skeleton, muscles, and various internal organs—function independently to help the entire organism survive, social structures work together to preserve society.
While reading Spencer's massive volumes can be tedious (long passages explicating the organic analogy, with reference to cells, simple organisms, animals, humans and society), there are some important insights that have quietly influenced many contemporary theorists, including Talcott Parsons, in his early work The Structure of Social Action (1937). Cultural anthropology also consistently uses functionalism.
This evolutionary model, unlike most 19th century evolutionary theories, is cyclical, beginning with the differentiation and increasing complication of an organic or "super-organic" (Spencer's term for a social system) body, followed by a fluctuating state of equilibrium and disequilibrium (or a state of adjustment and adaptation), and, finally, the stage of disintegration or dissolution. Following Thomas Malthus' population principles, Spencer concluded that society is constantly facing selection pressures (internal and external) that force it to adapt its internal structure through differentiation.
Every solution, however, causes a new set of selection pressures that threaten society's viability. Spencer was not a determinist in the sense that he never said that:
Selection pressures will be felt in time to change them;
They will be felt and reacted to; or
The solutions will always work.
In fact, he was in many ways a political sociologist, and recognized that the degree of centralized and consolidated authority in a given polity could make or break its ability to adapt. In other words, he saw a general trend towards the centralization of power as leading to stagnation and ultimately, pressures to decentralize.
More specifically, Spencer recognized three functional needs or prerequisites that produce selection pressures: they are regulatory, operative (production) and distributive. He argued that all societies need to solve problems of control and coordination, production of goods, services and ideas, and, finally, to find ways of distributing these resources.
Initially, in tribal societies, these three needs are inseparable, and the kinship system is the dominant structure that satisfies them. As many scholars have noted, all institutions are subsumed under kinship organization, but, with increasing population (both in terms of sheer numbers and density), problems emerge with regard to feeding individuals, creating new forms of organization—consider the emergent division of labour—coordinating and controlling various differentiated social units, and developing systems of resource distribution.
The solution, as Spencer sees it, is to differentiate structures to fulfill more specialized functions; thus, a chief or "big man" emerges, soon followed by a group of lieutenants, and later kings and administrators. The structural parts of society (e.g. families, work) function interdependently to help society function. Therefore, social structures work together to preserve society.
Talcott Parsons
Talcott Parsons began writing in the 1930s and contributed to sociology, political science, anthropology, and psychology. Structural functionalism and Parsons have received much criticism. Numerous critics have pointed out Parsons' underemphasis of political and monetary struggle, the basics of social change, and the by and large "manipulative" conduct unregulated by values and norms. Structural functionalism, and a large portion of Parsons' works, appear to be insufficient in their definitions concerning the connections amongst institutionalized and non-institutionalized conduct, and the procedures by which institutionalization happens.
Parsons was heavily influenced by Durkheim and Max Weber, synthesizing much of their work into his action theory, which he based on the system-theoretical concept and the methodological principle of voluntary action. He held that "the social system is made up of the actions of individuals". His starting point, accordingly, is the interaction between two individuals faced with a variety of choices about how they might act, choices that are influenced and constrained by a number of physical and social factors.
Parsons determined that each individual has expectations of the other's action and reaction to their own behavior, and that these expectations would (if successful) be "derived" from the accepted norms and values of the society they inhabit. As Parsons himself emphasized, in a general context there would never exist any perfect "fit" between behaviors and norms, so such a relation is never complete or "perfect".
Social norms were always problematic for Parsons, who never claimed (as has often been alleged) that social norms were generally accepted and agreed upon, should this prevent some kind of universal law. Whether social norms were accepted or not was for Parsons simply a historical question.
As behaviors are repeated in more interactions, and these expectations are entrenched or institutionalized, a role is created. Parsons defines a "role" as the normatively-regulated participation "of a person in a concrete process of social interaction with specific, concrete role-partners". Although any individual, theoretically, can fulfill any role, the individual is expected to conform to the norms governing the nature of the role they fulfill.
Furthermore, one person can and does fulfill many different roles at the same time. In one sense, an individual can be seen to be a "composition" of the roles he inhabits. Certainly, today, when asked to describe themselves, most people would answer with reference to their societal roles.
Parsons later developed the idea of roles into collectivities of roles that complement each other in fulfilling functions for society. Some roles are bound up in institutions and social structures (economic, educational, legal and even gender-based). These are functional in the sense that they assist society in operating and fulfilling its functional needs so that society runs smoothly.
Contrary to prevailing myth, Parsons never spoke about a society where there was no conflict or some kind of "perfect" equilibrium. A society's cultural value-system was in the typical case never completely integrated, never static and most of the time, as in the case of American society, in a complex state of transformation relative to its historical point of departure. To reach a "perfect" equilibrium was not any serious theoretical question in Parsons' analysis of social systems; indeed, the most dynamic societies had generally cultural systems with important inner tensions, as in the US and India. These tensions were a source of their strength according to Parsons rather than the opposite. Parsons never thought about system-institutionalization and the level of strains (tensions, conflict) in the system as opposite forces per se.
The key processes for Parsons for system reproduction are socialization and social control. Socialization is important because it is the mechanism for transferring the accepted norms and values of society to the individuals within the system. Parsons never spoke about "perfect socialization"; in any society, socialization was only partial and "incomplete" from an integral point of view.
Parsons states that "this point ... is independent of the sense in which [the] individual is concretely autonomous or creative rather than 'passive' or 'conforming', for individuality and creativity, are to a considerable extent, phenomena of the institutionalization of expectations"; they are culturally constructed.
Socialization is supported by the positive and negative sanctioning of role behaviours that do or do not meet these expectations. A punishment could be informal, like a snigger or gossip, or more formalized, through institutions such as prisons and mental homes. If these two processes were perfect, society would become static and unchanging, but in reality, this is unlikely to occur for long.
Parsons recognizes this, stating that he treats "the structure of the system as problematic and subject to change", and that his concept of the tendency towards equilibrium "does not imply the empirical dominance of stability over change". He does, however, believe that these changes occur in a relatively smooth way.
Individuals in interaction with changing situations adapt through a process of "role bargaining". Once the roles are established, they create norms that guide further action and are thus institutionalized, creating stability across social interactions. Where the adaptation process cannot adjust, due to sharp shocks or immediate radical change, structural dissolution occurs and either new structures (or therefore a new system) are formed, or society dies. This model of social change has been described as a "moving equilibrium", and emphasizes a desire for social order.
Davis and Moore
Kingsley Davis and Wilbert E. Moore (1945) gave an argument for social stratification based on the idea of "functional necessity" (also known as the Davis-Moore hypothesis). They argue that the most difficult jobs in any society have the highest incomes in order to motivate individuals to fill the roles needed by the division of labour. Thus, inequality serves social stability.
This argument has been criticized as fallacious from a number of different angles: the argument is both that the individuals who are the most deserving are the highest rewarded, and that a system of unequal rewards is necessary, otherwise no individuals would perform as needed for the society to function. The problem is that these rewards are supposed to be based upon objective merit, rather than subjective "motivations." The argument also does not clearly establish why some positions are worth more than others, even when they benefit more people in society, e.g., teachers compared to athletes and movie stars. Critics have suggested that structural inequality (inherited wealth, family power, etc.) is itself a cause of individual success or failure, not a consequence of it.
Robert Merton
Robert K. Merton made important refinements to functionalist thought. He fundamentally agreed with Parsons' theory but acknowledged that Parsons' theory could be questioned, believing that it was over generalized. Merton tended to emphasize middle range theory rather than a grand theory, meaning that he was able to deal specifically with some of the limitations in Parsons' thinking. Merton believed that any social structure probably has many functions, some more obvious than others. He identified three main limitations: functional unity, universal functionalism and indispensability. He also developed the concept of deviance and made the distinction between manifest and latent functions. Manifest functions referred to the recognized and intended consequences of any social pattern. Latent functions referred to unrecognized and unintended consequences of any social pattern.
Merton criticized functional unity, saying that not all parts of a modern complex society work for the functional unity of society. Consequently, there is a social dysfunction referred to as any social pattern that may disrupt the operation of society. Some institutions and structures may have other functions, and some may even be generally dysfunctional, or be functional for some while being dysfunctional for others. This is because not all structures are functional for society as a whole. Some practices are only functional for a dominant individual or a group.
There are two types of functions that Merton discusses: the "manifest functions", in which a social pattern triggers a recognized and intended consequence. The manifest function of education includes preparing for a career by getting good grades, graduating, and finding a good job. The second type of function is "latent functions", where a social pattern results in an unrecognized or unintended consequence. The latent functions of education include meeting new people, extra-curricular activities, and school trips.
Another type of social function is "social dysfunction", which is any undesirable consequence that disrupts the operation of society. The social dysfunction of education includes not getting good grades or a job. Merton states that by recognizing and examining the dysfunctional aspects of society we can explain the development and persistence of alternatives. Thus, as Holmwood states, "Merton explicitly made power and conflict central issues for research within a functionalist paradigm."
Merton also noted that there may be functional alternatives to the institutions and structures currently fulfilling the functions of society. This means that the institutions that currently exist are not indispensable to society. Merton states "just as the same item may have multiple functions, so may the same function be diversely fulfilled by alternative items." This notion of functional alternatives is important because it reduces the tendency of functionalism to imply approval of the status quo.
Merton's theory of deviance is derived from Durkheim's idea of anomie. It is central in explaining how internal changes can occur in a system. For Merton, anomie means a discontinuity between cultural goals and the accepted methods available for reaching them.
Merton believes that there are 5 situations facing an actor.
Conformity occurs when an individual has the means and desire to achieve the cultural goals socialized into them.
Innovation occurs when an individual strives to attain the accepted cultural goals but chooses to do so by a novel or unaccepted method.
Ritualism occurs when an individual continues to do things as prescribed by society but forfeits the achievement of the goals.
Retreatism is the rejection of both the means and the goals of society.
Rebellion is a combination of the rejection of societal goals and means and a substitution of other goals and means.
Thus it can be seen that change can occur internally in society through either innovation or rebellion. It is true that society will attempt to control these individuals and negate the changes, but as the innovation or rebellion builds momentum, society will eventually adapt or face dissolution.
Almond and Powell
In the 1970s, political scientists Gabriel Almond and Bingham Powell introduced a structural-functionalist approach to comparing political systems. They argued that, in order to understand a political system, it is necessary to understand not only its institutions (or structures) but also their respective functions. They also insisted that these institutions, to be properly understood, must be placed in a meaningful and dynamic historical context.
This idea stood in marked contrast to prevalent approaches in the field of comparative politics—the state-society theory and the dependency theory. These were the descendants of David Easton's system theory in international relations, a mechanistic view that saw all political systems as essentially the same, subject to the same laws of "stimulus and response"—or inputs and outputs—while paying little attention to unique characteristics. The structural-functional approach is based on the view that a political system is made up of several key components, including interest groups, political parties and branches of government.
In addition to structures, Almond and Powell showed that a political system consists of various functions, chief among them political socialization, recruitment and communication: socialization refers to the way in which societies pass along their values and beliefs to succeeding generations, and in political terms describe the process by which a society inculcates civic virtues, or the habits of effective citizenship; recruitment denotes the process by which a political system generates interest, engagement and participation from citizens; and communication refers to the way that a system promulgates its values and information.
Unilineal descent
In their attempt to explain the social stability of African "primitive" stateless societies where they undertook their fieldwork, Evans-Pritchard (1940) and Meyer Fortes (1945) argued that the Tallensi and the Nuer were primarily organized around unilineal descent groups. Such groups are characterized by common purposes, such as administering property or defending against attacks; they form a permanent social structure that persists well beyond the lifespan of their members. In the case of the Tallensi and the Nuer, these corporate groups were based on kinship which in turn fitted into the larger structures of unilineal descent; consequently Evans-Pritchard's and Fortes' model is called "descent theory". Moreover, in this African context territorial divisions were aligned with lineages; descent theory therefore synthesized both blood and soil as the same. Affinal ties with the parent through whom descent is not reckoned, however, are considered to be merely complementary or secondary (Fortes created the concept of "complementary filiation"), with the reckoning of kinship through descent being considered the primary organizing force of social systems. Because of its strong emphasis on unilineal descent, this new kinship theory came to be called "descent theory".
Descent theory quickly found its critics. Many African tribal societies seemed to fit this neat model rather well, although Africanists, such as Paul Richards, also argued that Fortes and Evans-Pritchard had deliberately downplayed internal contradictions and overemphasized the stability of the local lineage systems and their significance for the organization of society. However, in many Asian settings the problems were even more obvious. In Papua New Guinea, the local patrilineal descent groups were fragmented and contained large amounts of non-agnates. Status distinctions did not depend on descent, and genealogies were too short to account for social solidarity through identification with a common ancestor. In particular, the phenomenon of cognatic (or bilateral) kinship posed a serious problem to the proposition that descent groups are the primary element behind the social structures of "primitive" societies.
Leach's (1966) critique came in the form of the classical Malinowskian argument, pointing out that "in Evans-Pritchard's studies of the Nuer and also in Fortes's studies of the Tallensi unilineal descent turns out to be largely an ideal concept to which the empirical facts are only adapted by means of fictions". People's self-interest, manoeuvring, manipulation and competition had been ignored. Moreover, descent theory neglected the significance of marriage and affinal ties, which were emphasized by Lévi-Strauss's structural anthropology, at the expense of overemphasizing the role of descent. To quote Leach: "The evident importance attached to matrilateral and affinal kinship connections is not so much explained as explained away."
Biological
Biological functionalism is an anthropological paradigm, asserting that all social institutions, beliefs, values and practices serve to address pragmatic concerns. In many ways, the theory derives from the longer-established structural functionalism, yet the two theories diverge from one another significantly. While both maintain the fundamental belief that a social structure is composed of many interdependent frames of reference, biological functionalists criticise the structural view that social solidarity and a collective conscience are required in a functioning system. By that fact, biological functionalism maintains that our individual survival and health is the driving provocation of actions, and that the importance of social rigidity is negligible.
Everyday application
Although the actions of humans without doubt do not always engender positive results for the individual, a biological functionalist would argue that the intention was still self-preservation, albeit unsuccessful. An example of this is the belief in luck as an entity; while a disproportionately strong belief in good luck may lead to undesirable results, such as a huge loss in money from gambling, biological functionalism maintains that the newly created ability of the gambler to condemn luck will allow them to be free of individual blame, thus serving a practical and individual purpose. In this sense, biological functionalism maintains that while bad results often occur in life, which do not serve any pragmatic concerns, an entrenched cognitive psychological motivation was attempting to create a positive result, in spite of its eventual failure.
Decline
Structural functionalism reached the peak of its influence in the 1940s and 1950s, and by the 1960s was in rapid decline. By the 1980s, its place was taken in Europe by more conflict-oriented approaches, and more recently by structuralism. While some of the critical approaches also gained popularity in the United States, the mainstream of the discipline has instead shifted to a myriad of empirically oriented middle-range theories with no overarching theoretical orientation. To most sociologists, functionalism is now "as dead as a dodo".
As the influence of functionalism in the 1960s began to wane, the linguistic and cultural turns led to a myriad of new movements in the social sciences: "According to Giddens, the orthodox consensus terminated in the late 1960s and 1970s as the middle ground shared by otherwise competing perspectives gave way and was replaced by a baffling variety of competing perspectives. This third generation of social theory includes phenomenologically inspired approaches, critical theory, ethnomethodology, symbolic interactionism, structuralism, post-structuralism, and theories written in the tradition of hermeneutics and ordinary language philosophy."
While absent from empirical sociology, functionalist themes remained detectable in sociological theory, most notably in the works of Luhmann and Giddens. There are, however, signs of an incipient revival, as functionalist claims have recently been bolstered by developments in multilevel selection theory and in empirical research on how groups solve social dilemmas. Recent developments in evolutionary theory—especially by biologist David Sloan Wilson and anthropologists Robert Boyd and Peter Richerson—have provided strong support for structural functionalism in the form of multilevel selection theory. In this theory, culture and social structure are seen as a Darwinian (biological or cultural) adaptation at the group level.
Criticisms
In the 1960s, functionalism was criticized for being unable to account for social change, or for structural contradictions and conflict (and thus was often called "consensus theory"). Also, it ignores inequalities including race, gender, class, which cause tension and conflict. The refutation of the second criticism of functionalism, that it is static and has no concept of change, has already been articulated above, concluding that while Parsons' theory allows for change, it is an orderly process of change [Parsons, 1961:38], a moving equilibrium. Therefore, referring to Parsons' theory of society as static is inaccurate. It is true that it does place emphasis on equilibrium and the maintenance or quick return to social order, but this is a product of the time in which Parsons was writing (post-World War II, and the start of the cold war). Society was in upheaval and fear abounded. At the time social order was crucial, and this is reflected in Parsons' tendency to promote equilibrium and social order rather than social change.
Furthermore, Durkheim favoured a radical form of guild socialism along with functionalist explanations. Also, Marxism, while acknowledging social contradictions, still uses functionalist explanations. Parsons' evolutionary theory describes the differentiation and reintegration systems and subsystems and thus at least temporary conflict before reintegration (ibid). "The fact that functional analysis can be seen by some as inherently conservative and by others as inherently radical suggests that it may be inherently neither one nor the other."
Stronger criticisms include the epistemological argument that functionalism is tautologous, that is, it attempts to account for the development of social institutions solely through recourse to the effects that are attributed to them, and thereby explains the two circularly. However, Parsons drew directly on many of Durkheim's concepts in creating his theory. Certainly Durkheim was one of the first theorists to explain a phenomenon with reference to the function it served for society. He said, "the determination of function is…necessary for the complete explanation of the phenomena." However Durkheim made a clear distinction between historical and functional analysis, saying, "When ... the explanation of a social phenomenon is undertaken, we must seek separately the efficient cause which produces it and the function it fulfills." If Durkheim made this distinction, then it is unlikely that Parsons did not.
However Merton does explicitly state that functional analysis does not seek to explain why the action happened in the first instance, but why it continues or is reproduced. By this particular logic, it can be argued that functionalists do not necessarily explain the original cause of a phenomenon with reference to its effect. Yet the logic stated in reverse, that social phenomena are (re)produced because they serve ends, is unoriginal to functionalist thought. Thus functionalism is either undefinable or it can be defined by the teleological arguments which functionalist theorists normatively produced before Merton.
Another criticism describes the ontological argument that society cannot have "needs" as a human being does, and even if society does have needs they need not be met. Anthony Giddens argues that functionalist explanations may all be rewritten as historical accounts of individual human actions and consequences (see Structuration).
A further criticism directed at functionalism is that it contains no sense of agency, that individuals are seen as puppets, acting as their role requires. Yet Holmwood states that the most sophisticated forms of functionalism are based on "a highly developed concept of action," and as was explained above, Parsons took as his starting point the individual and their actions. His theory did not however articulate how these actors exercise their agency in opposition to the socialization and inculcation of accepted norms. As has been shown above, Merton addressed this limitation through his concept of deviance, and so it can be seen that functionalism allows for agency. It cannot, however, explain why individuals choose to accept or reject the accepted norms, why and in what circumstances they choose to exercise their agency, and this does remain a considerable limitation of the theory.
Further criticisms have been levelled at functionalism by proponents of other social theories, particularly conflict theorists, Marxists, feminists and postmodernists. Conflict theorists criticized functionalism's concept of systems as giving far too much weight to integration and consensus, and neglecting independence and conflict. Lockwood, in line with conflict theory, suggested that Parsons' theory missed the concept of system contradiction. He did not account for those parts of the system that might have tendencies to mal-integration. According to Lockwood, it was these tendencies that come to the surface as opposition and conflict among actors. However Parsons thought that the issues of conflict and cooperation were very much intertwined and sought to account for both in his model. In this however he was limited by his analysis of an "ideal type" of society which was characterized by consensus. Merton, through his critique of functional unity, introduced into functionalism an explicit analysis of tension and conflict. Yet Merton's functionalist explanations of social phenomena continued to rest on the idea that society is primarily co-operative rather than conflicted, which differentiates Merton from conflict theorists.
Marxism, which was revived soon after the emergence of conflict theory, criticized professional sociology (functionalism and conflict theory alike) for being partisan to advanced welfare capitalism. Gouldner thought that Parsons' theory specifically was an expression of the dominant interests of welfare capitalism, that it justified institutions with reference to the function they fulfill for society. It may be that Parsons' work implied or articulated that certain institutions were necessary to fulfill the functional prerequisites of society, but whether or not this is the case, Merton explicitly states that institutions are not indispensable and that there are functional alternatives. That he does not identify any alternatives to the current institutions does reflect a conservative bias, which as has been stated before is a product of the specific time that he was writing in.
As functionalism's prominence was ending, feminism was on the rise, and it attempted a radical criticism of functionalism. It believed that functionalism neglected the suppression of women within the family structure. Holmwood shows, however, that Parsons did in fact describe the situations where tensions and conflict existed or were about to take place, even if he did not articulate those conflicts. Some feminists agree, suggesting that Parsons provided accurate descriptions of these situations. On the other hand, Parsons recognized that he had oversimplified his functional analysis of women in relation to work and the family, and focused on the positive functions of the family for society and not on its dysfunctions for women. Merton, too, although addressing situations where function and dysfunction occurred simultaneously, lacked a "feminist sensibility".
Postmodernism, as a theory, is critical of claims of objectivity. Therefore, the idea of grand theory and grand narrative that can explain society in all its forms is treated with skepticism. This critique focuses on exposing the danger that grand theory can pose when not seen as a limited perspective, as one way of understanding society.
Jeffrey Alexander (1985) sees functionalism as a broad school rather than a specific method or system, such as Parsons, who is capable of taking equilibrium (stability) as a reference-point rather than assumption and treats structural differentiation as a major form of social change. The name 'functionalism' implies a difference of method or interpretation that does not exist. This removes the determinism criticized above. Cohen argues that rather than needs a society has dispositional facts: features of the social environment that support the existence of particular social institutions but do not cause them.
Influential theorists
Kingsley Davis
Michael Denton
Émile Durkheim
David Keen
Niklas Luhmann
Bronisław Malinowski
Robert K. Merton
Wilbert E. Moore
George Murdock
Talcott Parsons
Alfred Reginald Radcliffe-Brown
Herbert Spencer
Fei Xiaotong
See also
Causation (sociology)
Functional structuralism
Historicism
Neofunctionalism (sociology)
New institutional economics
Pure sociology
Sociotechnical system
Systems theory
Vacancy chain
Dennis Wrong (critic of structural functionalism)
Notes
References
Barnard, A. 2000. History and Theory in Anthropology. Cambridge: CUP.
Barnard, A., and Good, A. 1984. Research Practices in the Study of Kinship. London: Academic Press.
Barnes, J. 1971. Three Styles in the Study of Kinship. London: Butler & Tanner.
Elster, J., (1990), “Merton's Functionalism and the Unintended Consequences of Action”, in Clark, J., Modgil, C. & Modgil, S., (eds) Robert Merton: Consensus and Controversy, Falmer Press, London, pp. 129–35
Gingrich, P., (1999) “Functionalism and Parsons” in Sociology 250 Subject Notes, University of Regina, accessed, 24/5/06, uregina.ca
Holy, L. 1996. Anthropological Perspectives on Kinship. London: Pluto Press.
Homans, George Casper (1962). Sentiments and Activities. New York: The Free Press of Glencoe.
Hoult, Thomas Ford (1969). Dictionary of Modern Sociology.
Kuper, A. 1996. Anthropology and Anthropologists. London: Routledge.
Layton, R. 1997. An Introduction to Theory in Anthropology. Cambridge: CUP.
Leach, E. 1954. Political Systems of Highland Burma. London: Bell.
Leach, E. 1966. Rethinking Anthropology. Northampton: Dickens.
Lenski, Gerhard (1966). Power and Privilege: A Theory of Social Stratification. New York: McGraw-Hill.
Lenski, Gerhard (2005). Evolutionary-Ecological Theory. Boulder, CO: Paradigm.
Levi-Strauss, C. 1969. The Elementary Structures of Kinship. London: Eyre and Spottiswoode.
Maryanski, Alexandra (1998). "Evolutionary Sociology." Advances in Human Ecology. 7:1–56.
Maryanski, Alexandra and Jonathan Turner (1992). The Social Cage: Human Nature and the Evolution of Society. Stanford: Stanford University Press.
Marshall, Gordon (1994). The Concise Oxford Dictionary of Sociology.
Parsons, T., (1961) Theories of Society: foundations of modern sociological theory, Free Press, New York
Perey, Arnold (2005) "Malinowski, His Diary, and Men Today (with a note on the nature of Malinowskian functionalism)"
Ritzer, George and Douglas J. Goodman (2004). Sociological Theory, 6th ed. New York: McGraw-Hill.
Sanderson, Stephen K. (1999). Social Transformations: A General Theory of Historical Development. Lanham, MD: Rowman & Littlefield.
Turner, Jonathan (1995). Macrodynamics: Toward a Theory on the Organization of Human Populations. New Brunswick: Rutgers University Press.
Turner, Jonathan and Jan Stets (2005). The Sociology of Emotions. Cambridge: Cambridge University Press.
Comparative politics
Functionalism (social theory)
History of sociology
Sociological theories
Anthropology
Cognition | 0.785817 | 0.997675 | 0.78399 |
Condensation reaction | In organic chemistry, a condensation reaction is a type of chemical reaction in which two molecules are combined to form a single molecule, usually with the loss of a small molecule such as water. If water is lost, the reaction is also known as a dehydration synthesis. However other molecules can also be lost, such as ammonia, ethanol, acetic acid and hydrogen sulfide.
The addition of the two molecules typically proceeds in a step-wise fashion to the addition product, usually in equilibrium, and with loss of a water molecule (hence the name condensation). The reaction may otherwise involve the functional groups of the molecule, and is a versatile class of reactions that can occur in acidic or basic conditions or in the presence of a catalyst. This class of reactions is a vital part of life as it is essential to the formation of peptide bonds between amino acids and to the biosynthesis of fatty acids.
Many variations of condensation reactions exist. Common examples include the aldol condensation and the Knoevenagel condensation, which both form water as a by-product, as well as the Claisen condensation and the Dieckmann condensation (intramolecular Claisen condensation), which form alcohols as by-products.
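As a small computational illustration (assuming the open-source RDKit toolkit is available), the sketch below encodes a generic esterification, a textbook condensation in which a carboxylic acid and an alcohol combine with loss of water; the reaction SMARTS and the example molecules are chosen purely for illustration and do not model the mechanism.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Generic condensation of a carboxylic acid with an alcohol to give an ester;
# the water molecule lost in the process is left implicit in the SMARTS
# (only the ester product is written out).
esterification = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[OH][CX4:3]>>[C:1](=[O:2])O[C:3]"
)

acid = Chem.MolFromSmiles("CC(=O)O")   # acetic acid
alcohol = Chem.MolFromSmiles("CCO")    # ethanol

for (ester,) in esterification.RunReactants((acid, alcohol)):
    Chem.SanitizeMol(ester)
    print(Chem.MolToSmiles(ester))     # ethyl acetate, CCOC(C)=O
```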
Synthesis of prebiotic molecules
Condensation reactions likely played major roles in the synthesis of the first biotic molecules including early peptides and nucleic acids. In fact, condensation reactions would be required at multiple steps in RNA oligomerization: the condensation of nucleobases and sugars, nucleoside phosphorylation, and nucleotide polymerization.
See also
Anabolism
Hydrolysis, the opposite of a condensation reaction
Condensed tannins
References | 0.789896 | 0.992512 | 0.783981 |
Structure–activity relationship | The structure–activity relationship (SAR) is the relationship between the chemical structure of a molecule and its biological activity. This idea was first presented by Alexander Crum Brown and Thomas Richard Fraser at least as early as 1868.
The analysis of SAR enables the determination of the chemical group responsible for evoking a target biological effect in the organism. This allows modification of the effect or the potency of a bioactive compound (typically a drug) by changing its chemical structure. Medicinal chemists use the techniques of chemical synthesis to insert new chemical groups into the biomedical compound and test the modifications for their biological effects.
This method was refined to build mathematical relationships between the chemical structure and the biological activity, known as quantitative structure–activity relationships (QSAR). A related term is structure affinity relationship (SAFIR).
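As a schematic illustration of the QSAR idea, one can regress a measured activity against a calculated structural descriptor; in the sketch below the descriptor (a logP-like hydrophobicity value) and the activities (pIC50-like) are made-up numbers used only to show the shape of such a relationship.

```python
import numpy as np

# Synthetic one-descriptor QSAR: relate a calculated hydrophobicity descriptor
# to a measured activity. Both arrays are invented illustration data.
logp = np.array([0.5, 1.1, 1.8, 2.4, 3.0, 3.7])
pic50 = np.array([4.2, 4.6, 5.1, 5.5, 6.0, 6.3])

# Ordinary least-squares fit: pIC50 ~ slope * logP + intercept
slope, intercept = np.polyfit(logp, pic50, deg=1)
pred = slope * logp + intercept
r2 = 1 - np.sum((pic50 - pred) ** 2) / np.sum((pic50 - pic50.mean()) ** 2)

print(f"pIC50 = {slope:.2f} * logP + {intercept:.2f}   (R^2 = {r2:.3f})")
# A real QSAR model would use many descriptors, cross-validation and an
# external test set; this only shows the basic structure of the relationship.
```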
Structure–biodegradability relationship
The large number of synthetic organic chemicals currently in production presents a major challenge for timely collection of detailed environmental data on each compound. The concept of structure–biodegradability relationships (SBR) has been applied to explain variability in persistence among organic chemicals in the environment. Early attempts generally consisted of examining the degradation of a homologous series of structurally related compounds under identical conditions with a complex "universal" inoculum, typically derived from numerous sources. This approach revealed that the nature and positions of substituents affected the apparent biodegradability of several chemical classes, with resulting general themes, such as halogens generally conferring persistence under aerobic conditions. Subsequently, more quantitative approaches have been developed using principles of QSAR and often accounting for the role of sorption (bioavailability) in chemical fate.
See also
Combinatorial chemistry
Congener
Conformation activity relationship
Quantitative structure–activity relationship
Pharmacophore
References
External links
Molecular Property Explorer
QSAR World
Medicinal chemistry | 0.801504 | 0.977991 | 0.783863 |
Ecology | Ecology is the natural science of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open about broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
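One common way to index the species-diversity component, for example, is the Shannon index H' = -sum(p_i ln p_i), where p_i is the proportional abundance of species i; the short sketch below applies it to two hypothetical communities (the counts are invented for illustration).

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical abundance counts for two sampled communities.
even_community = [25, 25, 25, 25]     # four equally abundant species
skewed_community = [85, 5, 5, 5]      # one strongly dominant species

print(round(shannon_index(even_community), 3))    # 1.386 (ln 4, the maximum for four species)
print(round(shannon_index(skewed_community), 3))  # about 0.59, lower diversity
```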
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevices where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
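The competitive exclusion principle is often illustrated with the Lotka-Volterra competition equations; the sketch below integrates them with simple Euler steps and arbitrary illustrative parameters chosen so that the two species' niches overlap strongly enough for one to exclude the other.

```python
# Lotka-Volterra competition between two species sharing a limiting resource:
#   dN1/dt = r1*N1*(1 - (N1 + a12*N2)/K1)
#   dN2/dt = r2*N2*(1 - (N2 + a21*N1)/K2)
# All parameter values are arbitrary illustrations.
r1, r2 = 0.8, 0.6            # intrinsic growth rates
K1, K2 = 1000.0, 800.0       # carrying capacities
a12, a21 = 1.0, 1.4          # competition coefficients (effect of 2 on 1, of 1 on 2)

N1, N2, dt = 50.0, 50.0, 0.05
for _ in range(int(400 / dt)):
    dN1 = r1 * N1 * (1 - (N1 + a12 * N2) / K1)
    dN2 = r2 * N2 * (1 - (N2 + a21 * N1) / K2)
    N1 += dN1 * dt
    N2 += dN2 * dt

# With these coefficients species 1 approaches K1 while species 2 is
# competitively excluded (driven toward zero).
print(f"N1 = {N1:.1f}, N2 = {N2:.4f}")
```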
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:

dN/dt = (b − d)N = rN

where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
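As a minimal illustration of this island model (an illustrative sketch, not part of the original text; the function name and parameter values are hypothetical), the growth law dN/dt = (b − d)N = rN can be stepped forward numerically:

```python
# Minimal sketch of the closed-population (island) model described above:
# with no immigration or emigration, N changes only through per capita birth
# and death rates, so r = b - d and dN/dt = r * N.

def project_population(N0, b, d, years):
    """Project population size year by year under constant per capita rates."""
    r = b - d                      # per capita rate of population change
    sizes = [float(N0)]
    for _ in range(years):
        N = sizes[-1]
        sizes.append(N + r * N)    # discrete-time approximation of dN/dt = rN
    return sizes

# Example: births outpace deaths, so the population grows each year.
print([round(n, 1) for n in project_population(N0=100, b=0.30, d=0.20, years=5)])
```

With b greater than d the trajectory grows geometrically; with b less than d it declines toward zero, matching the Malthusian statement above.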
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:

dN(t)/dt = rN(t)(1 − αN(t))

where N(t) is the number of individuals measured as biomass density as a function of time, t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size, dN(t)/dt, will grow to approach equilibrium (dN(t)/dt = 0) when the rates of increase and crowding are balanced. A common, analogous model fixes the equilibrium as K, known as the "carrying capacity," and is written dN(t)/dt = rN(t)(1 − N(t)/K).
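The approach to carrying capacity can be made concrete with a short numerical integration (an illustrative sketch; the Euler step, parameter values, and function name are arbitrary choices, not from the original text):

```python
# Illustrative Euler integration of the Verhulst logistic model in its
# carrying-capacity form, dN/dt = r * N * (1 - N/K): growth is nearly
# exponential when N is small and slows to zero as N approaches K.

def logistic_trajectory(N0, r, K, dt=0.1, steps=200):
    """Return population sizes over time under logistic growth."""
    N = float(N0)
    trajectory = [N]
    for _ in range(steps):
        N += r * N * (1.0 - N / K) * dt
        trajectory.append(N)
    return trajectory

traj = logistic_trajectory(N0=10, r=0.5, K=1000)
print(round(traj[0]), round(traj[50]), round(traj[-1]))   # levels off near K = 1000
```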
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
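One common way to formalize this extinction-colonization dynamic is a Levins-style patch-occupancy model; the sketch below is illustrative only (the parameter values and function name are hypothetical, and the text above does not specify this particular model). Here p is the fraction of occupied patches, c the colonization rate, and e the local extinction rate, with dp/dt = cp(1 − p) − ep and an equilibrium occupancy of p* = 1 − e/c when c exceeds e:

```python
# Illustrative Levins-style patch-occupancy dynamics for a metapopulation:
# occupied patches send out colonists that rescue or recolonize empty patches,
# while local populations wink out at rate e.

def patch_occupancy(p0, c, e, dt=0.1, steps=500):
    """Euler integration of dp/dt = c*p*(1 - p) - e*p; returns final occupancy."""
    p = float(p0)
    for _ in range(steps):
        p += (c * p * (1.0 - p) - e * p) * dt
    return p

# Starting from 10% occupancy, the metapopulation settles near 1 - e/c = 0.75.
print(round(patch_occupancy(p0=0.1, c=0.4, e=0.1), 3))
```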
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes form self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shapes the biodiversity within each. A more recent addition to ecosystem ecology are technoecosystems, which are affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. The simplified linear feeding pathways that move from a basal trophic species to a top consumer is called the food chain. Food chains in an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.
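As a small illustration of treating a food web as a concept map, the feeding links can be stored as a directed graph in which each edge points from a resource to its consumer; the species and links below are hypothetical and serve only to show the data structure:

```python
# A food web represented as a directed graph: keys are resources, values are
# the consumers that feed on them. The species listed here are hypothetical.

food_web = {
    "grass":       ["grasshopper", "rabbit"],
    "grasshopper": ["songbird"],
    "rabbit":      ["fox"],
    "songbird":    ["fox"],
    "fox":         [],
}

def food_chains(web, species, path=None):
    """Enumerate linear feeding pathways (food chains) starting from a basal species."""
    path = (path or []) + [species]
    consumers = web.get(species, [])
    if not consumers:                      # a top consumer ends the chain
        return [path]
    chains = []
    for consumer in consumers:
        chains.extend(food_chains(web, consumer, path))
    return chains

for chain in food_chains(food_web, "grass"):
    print(" -> ".join(chain))
```

Enumerating the chains from a basal species makes the point above concrete: the simple linear pathways overlap and interleave into a web.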
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigations, e.g. into the gut contents of organisms, which can be difficult to decipher, or stable isotopes can be used to trace the flow of nutrient diets and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life.
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds means that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics, other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity also stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small scale patterns do not necessarily explain large scale phenomena, otherwise captured in the expression (attributed to Aristotle) 'the whole is greater than the sum of its parts'.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoid, flee, or defend. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost on their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
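The contrast between the two ends of the continuum can be illustrated with some hypothetical life-history numbers (the figures below are invented for the sketch and are not drawn from the text above):

```python
# Illustrative r- versus K-strategy comparison: many cheap offspring with low
# survival versus few costly offspring with high survival. Numbers are hypothetical.

strategies = {
    "r-selected (e.g., many insects)": {"offspring_per_brood": 500, "survival_to_maturity": 0.01},
    "K-selected (e.g., elephants)":    {"offspring_per_brood": 1,   "survival_to_maturity": 0.60},
}

for name, s in strategies.items():
    recruits = s["offspring_per_brood"] * s["survival_to_maturity"]
    print(f"{name}: ~{recruits:.1f} offspring expected to reach maturity per brood")
```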
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the founding of the journal Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and scholars in other disciplines were interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems, which builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the incapability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services of critical necessity that are beneficial to human health (cognitive and physiological) and to economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology because they are the ultimate base foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is employed in restoration (the repair of disturbed sites through human intervention), in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed as industry invests in restoring ecosystems and their processes at abandoned sites after disturbance. Natural resource managers, in forestry, for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented the Wetland Ordinance, improving the stability of its wetland environments through soil amendments that improve groundwater storage and flow and through the trimming or removal of vegetation that could harm water quality. Ecological science is used in methods of sustainable harvesting, in disease and fire outbreak management, in fisheries stock management, in integrating land-use with protected areas and communities, and in conservation across complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gasses that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduces the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth, are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressure places physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. Environmentally triggered germination of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and are fed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation at a time before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-to-mid Eocene, volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contribute to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles.
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel and Eugenius Warming, two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development
Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology
Notes
References
External links
The Nature Education Knowledge Project: Ecology
Biogeochemistry
Emergence
Reduction potential

Redox potential (also known as oxidation/reduction potential, ORP, pe, ε, or Eh) is a measure of the tendency of a chemical species to acquire electrons from or lose electrons to an electrode and thereby be reduced or oxidised, respectively. Redox potential is expressed in volts (V). Each species has its own intrinsic redox potential; for example, the more positive the reduction potential (reduction potential is more often used due to the general formalism in electrochemistry), the greater the species' affinity for electrons and its tendency to be reduced.
Measurement and interpretation
In aqueous solutions, redox potential is a measure of the tendency of the solution to either gain or lose electrons in a reaction. A solution with a higher (more positive) reduction potential than some other molecule will have a tendency to gain electrons from this molecule (i.e. to be reduced by oxidizing this other molecule) and a solution with a lower (more negative) reduction potential will have a tendency to lose electrons to other substances (i.e. to be oxidized by reducing the other substance). Because the absolute potentials are next to impossible to accurately measure, reduction potentials are defined relative to a reference electrode. Reduction potentials of aqueous solutions are determined by measuring the potential difference between an inert sensing electrode in contact with the solution and a stable reference electrode connected to the solution by a salt bridge.
The sensing electrode acts as a platform for electron transfer to or from the reference half cell; it is typically made of platinum, although gold and graphite can be used as well. The reference half cell consists of a redox standard of known potential. The standard hydrogen electrode (SHE) is the reference from which all standard redox potentials are determined, and has been assigned an arbitrary half cell potential of 0.0 V. However, it is fragile and impractical for routine laboratory use. Therefore, other more stable reference electrodes such as silver chloride and saturated calomel (SCE) are commonly used because of their more reliable performance.
Although measurement of the redox potential in aqueous solutions is relatively straightforward, many factors limit its interpretation, such as effects of solution temperature and pH, irreversible reactions, slow electrode kinetics, non-equilibrium, presence of multiple redox couples, electrode poisoning, small exchange currents, and inert redox couples. Consequently, practical measurements seldom correlate with calculated values. Nevertheless, reduction potential measurement has proven useful as an analytical tool in monitoring changes in a system rather than determining their absolute value (e.g. process control and titrations).
Explanation
Similar to how the concentration of hydrogen ions determines the acidity or pH of an aqueous solution, the tendency of electron transfer between a chemical species and an electrode determines the redox potential of an electrode couple. Like pH, redox potential represents how easily electrons are transferred to or from species in solution. Redox potential characterises the ability of a chemical species, under the specified conditions, to lose or gain electrons, rather than the amount of electrons available for oxidation or reduction.
The notion of pe is used with Pourbaix diagrams. pe is a dimensionless number and can easily be related to Eh by the following relationship:

pe = Eh / (λ VT) = Eh / 0.05916 V (at 25 °C)

where VT = RT/F is the thermal voltage, with R the gas constant (8.314 J·K−1·mol−1), T the absolute temperature in kelvin (298.15 K = 25 °C = 77 °F), F the Faraday constant (96 485 coulombs per mole of electrons), and λ = ln(10) ≈ 2.3026.

In fact, pe is defined as the negative logarithm of the free-electron activity in solution, and is directly proportional to the redox potential. Sometimes pe is used as a unit of reduction potential instead of Eh, for example in environmental chemistry. If one normalizes the pe of hydrogen to zero, one obtains the relation pe ≈ 16.9 Eh (with Eh in volts) at room temperature. This notion is useful for understanding redox potential, although the transfer of electrons, rather than the absolute concentration of free electrons in thermal equilibrium, is how one usually thinks of redox potential. Theoretically, however, the two approaches are equivalent.
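As a quick numeric illustration, the relation between Eh and pe can be evaluated directly. The sketch below is not from the source; the helper names and the 25 °C defaults are assumptions for illustration only.

```python
# Minimal sketch (assumed helpers, not from the source): convert between the redox
# potential Eh (in volts) and the dimensionless pe using pe = Eh / (ln(10) * R*T/F).
import math

R = 8.314462618   # gas constant, J/(mol K)
F = 96485.33212   # Faraday constant, C/mol

def eh_to_pe(eh_volts: float, temp_kelvin: float = 298.15) -> float:
    """Dimensionless pe corresponding to a redox potential Eh (volts)."""
    vt = R * temp_kelvin / F              # thermal voltage, ~0.02569 V at 25 C
    return eh_volts / (math.log(10) * vt)

def pe_to_eh(pe: float, temp_kelvin: float = 298.15) -> float:
    """Redox potential Eh (volts) corresponding to a dimensionless pe."""
    vt = R * temp_kelvin / F
    return pe * math.log(10) * vt

print(round(eh_to_pe(0.5), 2))    # ~8.45 at 25 C
print(round(pe_to_eh(16.9), 3))   # ~1.0 V at 25 C
```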
Conversely, one could define a potential corresponding to pH as a potential difference between a solute and pH neutral water, separated by porous membrane (that is permeable to hydrogen ions). Such potential differences actually do occur from differences in acidity on biological membranes. This potential (where pH neutral water is set to 0 V) is analogous with redox potential (where standardized hydrogen solution is set to 0 V), but instead of hydrogen ions, electrons are transferred across in the redox case. Both pH and redox potentials are properties of solutions, not of elements or chemical compounds themselves, and depend on concentrations, temperature etc.
The table below shows a few reduction potentials, which can be changed to oxidation potentials by reversing the sign. Reducers donate electrons to (or "reduce") oxidizing agents, which are said to "be reduced by" the reducer. The reducer is stronger when it has a more negative reduction potential and weaker when it has a more positive reduction potential. The more positive the reduction potential, the greater the species' affinity for electrons and tendency to be reduced. The following table provides the reduction potentials of the indicated reducing agents at 25 °C. For example, among sodium (Na) metal, chromium (Cr) metal, cuprous (Cu+) ion and chloride (Cl−) ion, Na metal is the strongest reducing agent while Cl− ion is the weakest; said differently, the Na+ ion is the weakest oxidizing agent in this list while the Cl2 molecule is the strongest.
Some elements and compounds can be both reducing or oxidizing agents. Hydrogen gas is a reducing agent when it reacts with non-metals and an oxidizing agent when it reacts with metals.
Hydrogen (whose reduction potential is 0.0 V) acts as an oxidizing agent when it reacts with the reducing agent lithium (whose reduction potential is −3.04 V): lithium donates electrons, so Li is oxidized and hydrogen is reduced.
Hydrogen acts as a reducing agent when it donates its electrons to fluorine, which allows fluorine to be reduced.
Standard reduction potential
The standard reduction potential is measured under standard conditions: T = 298.15 K (25 °C, or 77 °F), unity activity for each ion participating in the reaction, a partial pressure of 1 atm (1.013 bar) for each gas taking part in the reaction, and metals in their pure state. The standard reduction potential is defined relative to the standard hydrogen electrode (SHE) used as reference electrode, which is arbitrarily given a potential of 0.00 V. Because these values can also be referred to as "redox potentials", the terms "reduction potentials" and "oxidation potentials" are preferred by IUPAC. The two may be explicitly distinguished by the symbols E°red and E°ox, with E°ox = −E°red.
Half cells
The relative reactivities of different half cells can be compared to predict the direction of electron flow. A higher E°red means there is a greater tendency for reduction to occur, while a lower one means there is a greater tendency for oxidation to occur.
Any system or environment that accepts electrons from a normal hydrogen electrode is a half cell that is defined as having a positive redox potential; any system donating electrons to the hydrogen electrode is defined as having a negative redox potential. Eh is usually expressed in volts (V) or millivolts (mV). A high positive Eh indicates an environment that favors oxidation reactions, such as one containing free oxygen. A low negative Eh indicates a strongly reducing environment, such as one containing free metals.
Sometimes when electrolysis is carried out in an aqueous solution, water, rather than the solute, is oxidized or reduced. For example, if an aqueous solution of NaCl is electrolyzed, water may be reduced at the cathode to produce H2(g) and OH− ions, instead of Na+ being reduced to Na(s), as occurs in the absence of water. It is the reduction potential of each species present that will determine which species will be oxidized or reduced.
Absolute reduction potentials can be determined if one knows the actual potential between electrode and electrolyte for any one reaction. Surface polarization interferes with measurements, but various sources give an estimated potential for the standard hydrogen electrode of 4.4 V to 4.6 V (the electrolyte being positive).
Half-cell equations can be combined if the one corresponding to oxidation is reversed so that each electron given by the reductant is accepted by the oxidant. In this way, the global combined equation no longer contains electrons.
Nernst equation
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram (Eh–pH plot). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):

a A + b B + h H+ + z e− ⇌ c C + d D

The half-cell standard reduction potential E°red is given by

E°red (volt) = −ΔG° / (zF)

where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and Eh:

Eh (volt) = E°red − (0.05916/z) log({C}c {D}d / ({A}a {B}b)) − (0.05916 h/z) pH

where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Eh as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units).
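To make the straight-line behaviour concrete, the sketch below evaluates the Nernst equation for a generic half-reaction; the function name and parameters are illustrative assumptions, not part of the source.

```python
# Minimal sketch (assumed helper): Eh as a function of pH for a half-reaction
# a A + h H+ + z e- <=> c C, using
# Eh = E0 - (0.05916/z)*log10(activity ratio) - (0.05916*h/z)*pH at 25 C.
import math

def nernst_eh(e0: float, z: int, h: int, pH: float,
              activity_ratio: float = 1.0, temp_kelvin: float = 298.15) -> float:
    """activity_ratio = (product activities)/(reactant activities), H+ excluded."""
    R, F = 8.314462618, 96485.33212
    slope = math.log(10) * R * temp_kelvin / F    # ~0.05916 V at 25 C
    return e0 - (slope / z) * math.log10(activity_ratio) - (slope * h / z) * pH

# O2 + 4 H+ + 4 e- <=> 2 H2O (E0 = +1.229 V): Eh falls by ~59 mV per pH unit.
for pH in (0, 7, 14):
    print(pH, round(nernst_eh(1.229, z=4, h=4, pH=pH), 3))
```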
This equation predicts a lower Eh at higher pH values. This is observed for the reduction of O2 into H2O or OH−, and for the reduction of H+ into H2:

O2 + 4 H+ + 4 e− ⇌ 2 H2O

2 H+ + 2 e− ⇌ H2

In most (if not all) of the reduction reactions involving oxyanions with a central redox-active atom, the oxide anions in excess are freed up when the central atom is reduced. The acid–base neutralization of each oxide ion consumes 2 H+ or one H2O molecule as follows:

O2− + 2 H+ ⇌ H2O

O2− + H2O ⇌ 2 OH−

This is why protons are always engaged as a reagent on the left side of reduction reactions, as can be generally observed in the table of standard reduction potentials (data page).
In the rare instances where H+ is produced by a reduction reaction and thus appears on the right side of the equation, the slope of the line is inverted and thus positive (higher Eh at higher pH).

An example would be the reductive dissolution of magnetite (Fe3O4 ≈ Fe2O3·FeO, with 2 Fe(III) and 1 Fe(II)) to form 3 HFeO2− (in which the dissolved iron, Fe(II), is divalent and much more soluble than Fe(III)), while releasing one H+:

Fe3O4 + 2 H2O + 2 e− ⇌ 3 HFeO2− + H+

where:

Eh = E°red − 0.0885 log{HFeO2−} + 0.0296 pH

Note that the slope 0.0296 of the line is −1/2 of the −0.05916 value above, since h/z = −1/2. Note also that the value −0.0885 corresponds to −0.05916 × 3/2.
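The sign of the pH dependence follows directly from the stoichiometry; the short sketch below (not from the source) simply evaluates the slope −0.05916 h/z for the two cases discussed above.

```python
# Minimal sketch: pH slope of Eh (in V per pH unit) is -0.05916 * h / z, where h is
# the stoichiometric coefficient of H+ on the left-hand side (negative when H+ is
# produced) and z the number of electrons transferred.
def eh_ph_slope(h: int, z: int, nernst_slope: float = 0.05916) -> float:
    return -nernst_slope * h / z

print(eh_ph_slope(h=4, z=4))    # O2 + 4 H+ + 4 e- -> 2 H2O        : -0.05916
print(eh_ph_slope(h=-1, z=2))   # Fe3O4 + 2 H2O + 2 e- -> ... + H+ : +0.02958
```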
Biochemistry
Many enzymatic reactions are oxidation–reduction reactions, in which one compound is oxidized and another compound is reduced. The ability of an organism to carry out oxidation–reduction reactions depends on the oxidation–reduction state of the environment, or its reduction potential.
Strictly aerobic microorganisms are generally active at positive values, whereas strict anaerobes are generally active at negative values. Redox affects the solubility of nutrients, especially metal ions.
There are organisms that can adjust their metabolism to their environment, such as facultative anaerobes. Facultative anaerobes can be active at positive Eh values, and at negative Eh values in the presence of oxygen-bearing inorganic compounds, such as nitrates and sulfates.
In biochemistry, apparent standard reduction potentials, or formal potentials (E°′, noted with a prime mark in superscript), calculated at pH 7, closer to the pH of biological and intracellular fluids, are used to more easily assess whether a given biochemical redox reaction is possible. They must not be confused with the common standard reduction potentials determined under standard conditions (E°; pH = 0), with the concentration of each dissolved species taken as 1 M, and thus {H+} = 1 M.
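Under the assumption that the same Nernst pH term applies, an apparent potential at pH 7 can be estimated from the standard value; the helper below is an illustrative sketch, not a prescribed method from the source.

```python
# Minimal sketch (assumption): for a half-reaction consuming h protons and z electrons,
# the apparent (formal) potential at a chosen pH is E0' = E0 - 0.05916*(h/z)*pH.
def formal_potential(e0: float, h: int, z: int, pH: float = 7.0) -> float:
    return e0 - 0.05916 * (h / z) * pH

# 2 H+ + 2 e- <=> H2: E0 = 0.000 V, so E0' at pH 7 is about -0.414 V,
# the value commonly quoted for the hydrogen couple in biochemistry.
print(round(formal_potential(0.0, h=2, z=2), 3))
```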
Environmental chemistry
In the field of environmental chemistry, the reduction potential is used to determine whether oxidizing or reducing conditions are prevalent in water or soil, and to predict the states of different chemical species in the water, such as dissolved metals. pe values in water range from −12 to 25, the levels at which the water itself becomes reduced or oxidized, respectively.
The reduction potentials in natural systems often lie comparatively near one of the boundaries of the stability region of water. Aerated surface water, rivers, lakes, oceans, rainwater and acid mine water, usually have oxidizing conditions (positive potentials). In places with limitations in air supply, such as submerged soils, swamps and marine sediments, reducing conditions (negative potentials) are the norm. Intermediate values are rare and usually a temporary condition found in systems moving to higher or lower pe values.
In environmental situations, it is common to have complex non-equilibrium conditions between a large number of species, meaning that it is often not possible to make accurate and precise measurements of the reduction potential. However, it is usually possible to obtain an approximate value and define the conditions as being in the oxidizing or reducing regime.
In the soil there are two main redox constituents: (1) inorganic redox systems (mainly oxidised/reduced compounds of Fe and Mn), measured in water extracts; and (2) natural soil samples with all their microbial and root components, measured by a direct method.
Water quality
The oxidation–reduction potential (ORP) can be used by systems monitoring water quality, with the advantage of providing a single-value measure of disinfection potential that reflects the effective activity of the disinfectant rather than the applied dose. For example, E. coli, Salmonella, Listeria and other pathogens have survival times of less than 30 seconds when the ORP is above 665 mV, compared to more than 300 seconds when the ORP is below 485 mV.
A study was conducted comparing traditional parts per million (ppm) chlorination readings and ORP in Hennepin County, Minnesota. The results of this study present arguments in favor of including an ORP threshold above 650 mV in local health regulation codes.
Geochemistry and mineralogy
Eh–pH (Pourbaix) diagrams are commonly used in mining and geology for assessment of the stability fields of minerals and dissolved species. Under the conditions where a mineral (solid) phase is predicted to be the most stable form of an element, these diagrams show that mineral. As the predicted results are all from thermodynamic (at equilibrium state) evaluations, these diagrams should be used with caution. Although the formation of a mineral or its dissolution may be predicted to occur under a set of conditions, the process may practically be negligible because its rate is too slow. Consequently, kinetic evaluations at the same time are necessary. Nevertheless, the equilibrium conditions can be used to evaluate the direction of spontaneous changes and the magnitude of the driving force behind them.
See also
Electrochemical potential
Electrolytic cell
Electromotive force
Fermi level
Galvanic cell
Oxygen radical absorbance capacity
Pourbaix diagram
Redox
Redox gradient
Solvated electron
Standard electrode potential
Table of standard electrode potentials
Standard apparent reduction potentials in biochemistry at pH 7
References
External links
Online Calculator Redoxpotential ("Redox Compensation")
Redox potential exercices in biological systems
Oxidizing and Reducing Agents in Redox Reactions
Electrochemical concepts
Enthalpy

Enthalpy (symbol H) is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work that was done against constant external pressure to establish the system's physical dimensions from an initial volume of zero to some final volume V (as pΔV = pV), i.e. to make room for it by displacing its surroundings.
The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it.
In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU).
The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat.
In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states (p = 1 bar; usually T = 298 K) change to products in their standard states.
This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function.
Enthalpies of chemical substances are usually listed for 1 bar (100 kPa) pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature,
but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change ΔH is a positive value; for exothermic (heat-releasing) processes it is negative.
The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis.
The word "enthalpy" is derived from the Greek word enthalpein, which means to heat.
Definition
The enthalpy H of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:

H = U + pV,

where U is the internal energy, p is pressure, and V is the volume of the system; pV is sometimes referred to as the pressure energy.
Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the specific enthalpy h = H/m is referenced to a unit of mass m of the system, and the molar enthalpy Hm = H/n, where n is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems:

H = Σk Hk,
where
H is the total enthalpy of all the subsystems,
k refers to the various subsystems,
Hk refers to the enthalpy of each subsystem.
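As a small numerical illustration of the definition (values chosen for illustration, not taken from the source), the enthalpy, specific enthalpy, and molar enthalpy of one mole of an ideal monatomic gas can be computed directly:

```python
# Minimal sketch: enthalpy from its definition H = U + p*V, plus the specific
# enthalpy h = H/m and the molar enthalpy Hm = H/n.
def enthalpy(u_joule: float, p_pascal: float, v_m3: float) -> float:
    return u_joule + p_pascal * v_m3

# 1 mol of a monatomic ideal gas (e.g. helium) at 298.15 K and 1 bar:
U = 3718.0          # internal energy (3/2)RT, in joules
V = 0.02479         # molar volume RT/p, in m^3
H = enthalpy(U, 1.0e5, V)
print(round(H))          # ~6197 J, i.e. (5/2)RT
print(round(H / 0.004))  # specific enthalpy h = H/m for m = 4 g of helium, J/kg
print(round(H / 1.0))    # molar enthalpy Hm = H/n for n = 1 mol, J/mol
```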
A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure p varies continuously with altitude, while, because of the equilibrium requirement, its temperature T is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:

H = ∫ (ρ h) dV,
where
("rho") is density (mass per unit volume),
is the specific enthalpy (enthalpy per unit mass),
represents the enthalpy density (enthalpy per unit volume),
denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer.
The integral therefore represents the sum of the enthalpies of all the elements of the volume.
The enthalpy of a closed homogeneous system is its energy function H(S, p), with its entropy S and its pressure p as natural state variables, which provide a differential relation for dH of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:

dU = δQ − δW,
where
δQ is a small amount of heat added to the system,
δW is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives δQ = T dS, with T the absolute temperature and dS the infinitesimal change in entropy S of the system. Furthermore, if only pV work is done, δW = p dV. As a result,

dU = T dS − p dV.

Adding d(pV) to both sides of this expression gives

dU + d(pV) = T dS − p dV + d(pV),

or

d(U + pV) = T dS + V dp.

So

dH(S, p) = T dS + V dp,

and the coefficients of the natural variable differentials dS and dp are just the single variables T and V.
Other expressions
The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure:

dH = Cp dT + V(1 − αT) dp.

Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:

α = (1/V)(∂V/∂T)p.

With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T. However, the expression is more complicated than dH = T dS + V dp because T is not a natural variable for the enthalpy H.
At constant pressure, dp = 0, so that dH = Cp dT. For an ideal gas, dH reduces to this form even if the process involves a pressure change, because αT = 1 (i.e. α = 1/T), so the V(1 − αT) dp term vanishes.
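For a constant-pressure heating step this reduces to a simple integral of the heat capacity; the sketch below (not from the source) assumes a constant Cp for brevity, though a temperature-dependent Cp(T) could be integrated numerically instead.

```python
# Minimal sketch: at constant pressure dH = Cp dT, so the enthalpy change of heating
# is the integral of Cp over temperature (here taken constant).
def delta_h_constant_pressure(cp_j_per_mol_k: float, t1_kelvin: float,
                              t2_kelvin: float, n_mol: float = 1.0) -> float:
    return n_mol * cp_j_per_mol_k * (t2_kelvin - t1_kelvin)

# Heating 1 mol of N2 (ideal diatomic gas, Cp ~ (7/2)R ~ 29.1 J/(mol K))
# from 300 K to 380 K at constant pressure:
print(delta_h_constant_pressure(29.1, 300.0, 380.0))   # ~2328 J
```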
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes

dH = T dS + V dp + Σi μi dNi,

where μi is the chemical potential per particle for a type-i particle, and Ni is the number of such particles. The last term can also be written as μi dni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μi dmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).
Characteristic functions and natural state variables
The enthalpy H(S, p, {Ni}) expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments include both one intensive and several extensive state variables. The state variables S, p, and {Ni} are said to be the natural state variables in this representation. They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.
Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function S(H, p, {Ni}) of the same list of variables of state, except that the entropy S is replaced in the list by the enthalpy H. It expresses the entropy representation. The state variables H, p, and {Ni} are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, H and p can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system.
Physical interpretation
The U term is the energy of the system, and the pV term can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.
In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system, and therefore the internal energy U is used.
In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that the enthalpy change ΔH is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial state are equal.
Relationship to heat
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: dU = δQ − δW, where the heat δQ is supplied by conduction, radiation, or Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by p dV (where p is the pressure at the surface, dV is the increase of the volume of the system). Cases of long-range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads:

dU = δQ − p dV.

Now,

dH = dU + d(pV).

So

dH = δQ + V dp + p dV − p dV = δQ + V dp.

If the system is under constant pressure, dp = 0, and consequently the increase in enthalpy of the system is equal to the heat added:

dH = δQ.
This is why the now-obsolete term heat content was used for enthalpy in the 19th century.
Applications
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.
Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.
For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process.
Heat of reaction
The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:

ΔH = Hf − Hi,

where
ΔH is the enthalpy change,
Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or of the system at equilibrium),
Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy, , is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat.
Conversely, for a constant-pressure endothermic reaction, is positive and equal to the heat absorbed in the reaction.
From the definition of enthalpy as H = U + pV, the enthalpy change at constant pressure is ΔH = ΔU + p ΔV. However, for most chemical reactions, the work term p ΔV is much smaller than the internal energy change ΔU, which is approximately equal to ΔH. As an example, for the combustion of carbon monoxide, 2 CO(g) + O2(g) → 2 CO2(g), ΔH = −566.0 kJ and ΔU = −563.5 kJ.
Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies.
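For gas-phase reactions, the difference between ΔH and ΔU can be estimated from the change in the amount of gas; the sketch below (not from the source) assumes ideal-gas behaviour.

```python
# Minimal sketch: for (approximately ideal) gases at constant T and p, the work term
# is p*dV = dn_gas*R*T, so dH - dU = dn_gas*R*T.
R = 8.314462618  # gas constant, J/(mol K)

def dh_minus_du(delta_n_gas: float, temp_kelvin: float = 298.15) -> float:
    """Difference between reaction enthalpy and reaction internal energy, in joules."""
    return delta_n_gas * R * temp_kelvin

# 2 CO(g) + O2(g) -> 2 CO2(g): delta_n_gas = 2 - 3 = -1
print(round(dh_minus_du(-1) / 1000, 2))  # ~ -2.48 kJ: dH ~ -566.0 kJ vs dU ~ -563.5 kJ
```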
Specific enthalpy
The specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities as h = u + pv, where u is the specific internal energy, p is the pressure, and v is the specific volume, which is equal to 1/ρ, where ρ is the density.
Enthalpy changes
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process.
A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.
When used in these recognized terms, the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values, it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
A pressure of one atmosphere (1 atm or 1013.25 hPa) or 1 bar
A temperature of 25 °C or 298.15 K
A concentration of 1.0 M when the element or compound is present in solution
Elements or compounds in their normal physical states, i.e. standard state
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.
Chemical properties
Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely.
Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions are completely dissolved in water forming one mole of aqueous ions.
Physical properties
Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid.
Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas.
Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas.
Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.
Open systems
In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: the increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system:

dU = δQ + dUin − dUout − δW,

where Uin is the average internal energy entering the system, and Uout is the average internal energy leaving the system.
The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and mechanical work (shaft work), which may be performed on some mechanical device such as a turbine or pump.
These two types of work are expressed in the equation

δW = d(pout Vout) − d(pin Vin) + δWshaft.

Substitution into the equation above for the control volume (cv) yields:

dUcv = δQ + dUin + d(pin Vin) − dUout − d(pout Vout) − δWshaft.

The definition of enthalpy, H = U + pV, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:

dUcv = δQ + dHin − dHout − δWshaft.
If we allow also the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.
In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:

dU/dt = Σk Q̇k + Σk Ḣk − Σk pk (dVk/dt) − P,

with sums over the various places k where heat is supplied, mass flows into the system, and boundaries are moving. The Ḣk terms represent enthalpy flows, which can be written as

Ḣk = hk ṁk = Hm,k ṅk,

with ṁk the mass flow and ṅk the molar flow at position k, respectively. The term pk (dVk/dt) represents the rate of change of the system volume at position k that results in pV power done by the system. The parameter P represents all other forms of power done by the system, such as shaft power, but it can also be, say, electric power produced by an electrical power plant.
Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:

P = Σk ⟨Q̇k⟩ + Σk ⟨Ḣk⟩,

where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
Diagrams
The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as h–T diagrams, which give the specific enthalpy as a function of temperature for various pressures, and h–p diagrams, which give h as a function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram (T–s diagram). It gives the melting curve and the saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Some basic applications
The points a through h in the figure play a role in the discussion in this section.
{| class="wikitable" style="text-align:center"
|-
!rowspan=2|Point
! T !! p !! s !! h
|- style="background:#EEEEEE;"
| K || bar || kJ/(kg K) || kJ/kg
|-
| a || 300 || 1 || 6.85 || 461
|-
| b || 380 || 2 || 6.85 || 530
|-
| c || 300 || 200 || 5.16 || 430
|-
| d || 270 || 1 || 6.79 || 430
|-
| e || 108 || 13 || 3.55 || 100
|-
| f || 77.2 || 1 || 3.75 || 100
|-
| g || 77.2 || 1 || 2.83 || 28
|-
| h || 77.2 || 1 || 5.41 || 230
|}
Points e and g are saturated liquids, and point h is a saturated gas.
Throttling
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.
For a steady-state flow regime, the enthalpy of the system (dotted rectangle) has to be constant. Hence

Ḣ1 = Ḣ2, i.e. ṁ h1 = ṁ h2.

Since the mass flow ṁ is constant, the specific enthalpies at the two sides of the flow resistance are the same:

h1 = h2,

that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the T–s diagram above.
Example 1
Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram), lying between the 400 and 450 kJ/kg isenthalps, and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value.
Example 2
Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (hf) is equal to the enthalpy in g (hg) multiplied by the liquid fraction in f (xf) plus the enthalpy in h (hh) multiplied by the gas fraction in f (1 − xf). So

hf = xf hg + (1 − xf) hh.

With numbers:

100 = xf × 28 + (1 − xf) × 230,

so

xf = 0.64.

This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
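The same enthalpy balance can be rearranged to compute the liquid fraction directly from the tabulated enthalpies; the helper below is an illustrative sketch using the values of points e, g and h from the table above.

```python
# Minimal sketch: liquid mass fraction after throttling into the two-phase region,
# from the enthalpy balance h_f = x_f*h_g + (1 - x_f)*h_h.
def liquid_fraction(h_f: float, h_sat_liquid: float, h_sat_gas: float) -> float:
    return (h_sat_gas - h_f) / (h_sat_gas - h_sat_liquid)

# Nitrogen throttled from point e (h = 100 kJ/kg) to 1 bar, where the saturated
# liquid has h = 28 kJ/kg (point g) and the saturated gas h = 230 kJ/kg (point h):
print(round(liquid_fraction(100.0, 28.0, 230.0), 2))   # ~0.64, i.e. 64% liquid by mass
```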
Compressors
A compressor takes in a gas and delivers it at higher pressure; a power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the T–s diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature Ta, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state, the first law gives

0 = −Q̇ + ṁ h1 − ṁ h2 + P.

The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives

0 = −Q̇/Ta + ṁ s1 − ṁ s2.

Eliminating Q̇ gives for the minimal power

Pmin/ṁ = h2 − h1 − Ta(s2 − s1).

For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (hc − ha) − Ta(sc − sa). With the data obtained from the T–s diagram, we find a value of (430 − 461) − 300 × (5.16 − 6.85) = 476 kJ/kg.
The relation for the power can be further simplified by writing it as

Pmin/ṁ = ∫12 (dh − Ta ds).

With

dh = T ds + v dp,

this results in the final relation (for reversible, isothermal compression at T = Ta)

Pmin = ṁ ∫12 v dp.
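The minimal specific compression work can be checked against the tabulated data; the sketch below is illustrative (helper name assumed) and simply evaluates (h2 − h1) − Ta(s2 − s1) with the values of points a and c.

```python
# Minimal sketch: minimal specific work for reversible, isothermally cooled
# compression, w_min = (h2 - h1) - Ta*(s2 - s1).
def minimal_compression_work(h1: float, h2: float, s1: float, s2: float,
                             t_ambient: float) -> float:
    """h in kJ/kg, s in kJ/(kg K), Ta in K; result in kJ/kg."""
    return (h2 - h1) - t_ambient * (s2 - s1)

# Nitrogen from point a (1 bar: h = 461, s = 6.85) to point c (200 bar: h = 430, s = 5.16):
print(minimal_compression_work(461.0, 430.0, 6.85, 5.16, 300.0))   # ~476 kJ/kg
```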
History and etymology
The term enthalpy was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy was coined by Rudolf Clausius in 1865. Energy uses the root of the Greek word ἔργον (ergon), meaning "work", to express the idea of capacity to perform work. Entropy uses the Greek word τροπή (tropē), meaning transformation or turning. Enthalpy uses the root of the Greek word θάλπος (thalpos), meaning "warmth, heat".
The term expresses the obsolete concept of heat content, as dH refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. Gibbs used the term "a heat function for constant pressure" for clarity.
Introduction of the concept of "heat content" is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850).
The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams, published in 1927.
Until the 1920s, the symbol H was used, somewhat inconsistently, for "heat" in general. The definition of H as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922.
Notes
See also
Calorimetry
Calorimeter
Departure function
Hess's law
Isenthalpic process
Laws of thermodynamics
Stagnation enthalpy
Standard enthalpy of formation
Thermodynamic databases for pure substances
References
Bibliography
External links
State functions
Energy (physics)
Physical quantities