Citric acid cycle
The citric acid cycle—also known as the Krebs cycle, Szent–Györgyi–Krebs cycle, or TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions that release the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, proteins, and alcohol. The chemical energy released is available in the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is described as a "cycle", it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized. The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD+ to NADH, and releases carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP. In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol, with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP.

Discovery
Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after being broken down in the Latapie mincer and released into aqueous solutions, pigeon breast muscle was well suited to the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle".

Overview
The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two-carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced, which enters the citric acid cycle.
The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD+) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP. One of the primary sources of acetyl-CoA is the breakdown of sugars by glycolysis, which yields pyruvate that is in turn decarboxylated by the pyruvate dehydrogenase complex, generating acetyl-CoA according to the following reaction scheme: pyruvate + CoA-SH + NAD+ → acetyl-CoA + CO2 + NADH. The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. In outline, the cycle proceeds as follows. The citric acid cycle begins with the transfer of a two-carbon acetyl group from acetyl-CoA to the four-carbon acceptor compound (oxaloacetate) to form a six-carbon compound (citrate). The citrate then goes through a series of chemical transformations, losing two carboxyl groups as CO2. The carbons lost as CO2 originate from what was oxaloacetate, not directly from acetyl-CoA. The carbons donated by acetyl-CoA become part of the oxaloacetate carbon backbone after the first turn of the citric acid cycle. Loss of the acetyl-CoA-donated carbons as CO2 requires several turns of the citric acid cycle. However, because of the role of the citric acid cycle in anabolism, they might not be lost, since many citric acid cycle intermediates are also used as precursors for the biosynthesis of other molecules. Most of the electrons made available by the oxidative steps of the cycle are transferred to NAD+, forming NADH. For each acetyl group that enters the citric acid cycle, three molecules of NADH are produced. The citric acid cycle includes a series of redox reactions in mitochondria. In addition, electrons from the succinate oxidation step are transferred first to the FAD cofactor of succinate dehydrogenase, reducing it to FADH2, and eventually to ubiquinone (Q) in the mitochondrial membrane, reducing it to ubiquinol (QH2), which is a substrate of the electron transfer chain at the level of Complex III. For every NADH and FADH2 that are produced in the citric acid cycle, 2.5 and 1.5 ATP molecules are generated in oxidative phosphorylation, respectively. At the end of each cycle, the four-carbon oxaloacetate has been regenerated, and the cycle continues.

Steps
There are ten basic steps in the citric acid cycle. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, which enters before the first step (conventionally numbered step 0). Two carbon atoms are oxidized to CO2, and the energy from these reactions is transferred to other metabolic processes as GTP (or ATP) and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation.
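For reference, the per-acetyl-group stoichiometry just described (three NADH, one FADH2, one GTP, and two CO2 per turn) can be collected into a single net equation. The form below is a standard textbook summary rather than a quotation from this article; conventions for counting water and protons vary slightly between sources.

```latex
% Net reaction for one turn of the citric acid cycle, per acetyl group entering as acetyl-CoA.
\[
\text{acetyl-CoA} + 3\,\text{NAD}^{+} + \text{FAD} + \text{GDP} + \text{P}_{i} + 2\,\text{H}_2\text{O}
\;\longrightarrow\;
\text{CoA-SH} + 3\,\text{NADH} + 3\,\text{H}^{+} + \text{FADH}_2 + \text{GTP} + 2\,\text{CO}_2
\]
```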
FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain. Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix. The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP). Products Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2. Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2. The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively. The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38. Efficiency The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. If transported using the glycerol phosphate shuttle rather than the malate–aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduces the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule. Variation While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant). Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.41, while prokaryotes employ the NADP+-dependent EC 1.1.1.42. Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD+-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4. A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). 
In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4) also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5). Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD+-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase. In cancer, there are substantial metabolic derangements that occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis, dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is a minor product of several metabolic pathways as an error but readily converted to alpha-ketoglutarate via hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH) but does not have a known physiologic role in mammalian cells; of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor for a number of enzymes that facilitate reactions via alpha-ketoglutarate in alpha-ketoglutarate-dependent dioxygenases. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-catalyzed reduction, this can contribute to depletion of cellular stores of NADPH and also reduce levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. 
However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs, which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze their reactions results in stabilization of hypoxia-inducible factor alpha, since prolyl hydroxylation is necessary to promote degradation of the latter (just as, under conditions of low oxygen, there is not adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration.

Regulation
Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzymes such as NADH, and of ATP. The major eventual substrate of the cycle is ADP, which gets converted to ATP. A reduced amount of ADP causes accumulation of NADH, which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-CoA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%. Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme.

Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach tens of micromolar during cellular activation. It activates pyruvate dehydrogenase phosphatase, which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway.

Transcriptional regulation. There is a link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates its interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets it for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases.
Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF. Major metabolic pathways converging on the citric acid cycle Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions. In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products. Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle. However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity. In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell. Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-Enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Following, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just like fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate. 
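The anaplerotic carboxylation of pyruvate mentioned above can be written out explicitly. This is the standard pyruvate carboxylase reaction in textbook form (not quoted from this article); the enzyme uses biotin as a cofactor and bicarbonate as the carbon source.

```latex
% Anaplerotic pyruvate carboxylase reaction replenishing oxaloacetate.
\[
\text{pyruvate} + \text{HCO}_3^{-} + \text{ATP}
\;\longrightarrow\;
\text{oxaloacetate} + \text{ADP} + \text{P}_{i}
\]
```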
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis. In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA which can be burned to CO2 and water, or used to form ketone bodies, which too can only be burned in tissues other than the liver where they are formed, or excreted via the urine or breath. These latter amino acids are therefore termed "ketogenic" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via malate which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called "glucogenic" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as oxaloacetate (an anaplerotic reaction) or as acetyl-CoA to be disposed of as CO2 and water. In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate. In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial acetyl-CoA, which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into succinyl-CoA and fed into the citric acid cycle as an anaplerotic intermediate. The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 acetyl-CoA molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of acetyl-CoA is 40. Citric acid cycle intermediates serve as substrates for biosynthetic processes In this subheading, as in the previous one, the TCA intermediates are identified by italics. Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. 
Acetyl-CoA cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, citrate is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D. The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids, the alpha-keto acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into alpha-ketoglutarate, which is a citric acid cycle intermediate. The intermediates that can provide the carbon skeletons for amino acid synthesis are oxaloacetate, which forms aspartate and asparagine, and alpha-ketoglutarate, which forms glutamine, proline, and arginine. Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA. The pyrimidines are partly assembled from aspartate (derived from oxaloacetate). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP. The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, succinyl-CoA. These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes. During gluconeogenesis, mitochondrial oxaloacetate is reduced to malate, which is then transported out of the mitochondrion to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate-limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney. Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway.

Glucose feeds the TCA cycle via circulating lactate
The metabolic role of lactate is well recognized as a fuel for tissues, in mitochondrial cytopathies such as DPH cytopathy, and in the scientific field of oncology (tumors). In the classical Cori cycle, muscles produce lactate which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle.

Evolution
It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. It may even predate biosis: the substrates appear to undergo most of the reactions spontaneously in the presence of persulfate radicals. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged on the TCA cycle.
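To tie together the yields quoted in the Products and Efficiency sections above, the arithmetic can be written out explicitly. The sketch below is illustrative only: the coenzyme counts and per-coenzyme ATP values are the conventional rounded figures cited above, and the glycerol phosphate shuttle case is modelled simply by crediting cytosolic NADH at the FADH2 yield (equivalent to the two-ATP transport cost described earlier).

```python
# Illustrative ATP bookkeeping per glucose, reproducing the figures quoted above.
# The coenzyme counts (10 NADH, 2 FADH2, 4 substrate-level ATP/GTP) and the
# per-coenzyme yields are conventional rounded textbook values, not measurements.

def atp_per_glucose(atp_per_nadh, atp_per_fadh2, cytosolic_nadh_as_fadh2=False):
    """Total ATP per glucose for given per-coenzyme yields.

    cytosolic_nadh_as_fadh2: if True, the 2 NADH from glycolysis enter the
    mitochondrion via the glycerol phosphate shuttle and are credited at the
    FADH2 yield instead of the NADH yield.
    """
    substrate_level = 4       # 2 ATP (glycolysis) + 2 GTP/ATP (citric acid cycle)
    mitochondrial_nadh = 8    # 2 (pyruvate dehydrogenase) + 6 (citric acid cycle)
    cytosolic_nadh = 2        # glycolysis
    fadh2 = 2                 # citric acid cycle (succinate dehydrogenase)

    total = substrate_level
    total += mitochondrial_nadh * atp_per_nadh
    total += fadh2 * atp_per_fadh2
    if cytosolic_nadh_as_fadh2:
        total += cytosolic_nadh * atp_per_fadh2
    else:
        total += cytosolic_nadh * atp_per_nadh
    return total

print(atp_per_glucose(3.0, 2.0))                                # 38.0, theoretical maximum
print(atp_per_glucose(3.0, 2.0, cytosolic_nadh_as_fadh2=True))  # 36.0, glycerol phosphate shuttle
print(atp_per_glucose(2.5, 1.5, cytosolic_nadh_as_fadh2=True))  # 30.0, observed-style yields
```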
See also
Calvin cycle
Glyoxylate cycle
Reverse (reductive) Krebs cycle

External links
An animation of the citric acid cycle at Smith College
Citric acid cycle variants at MetaCyc
Pathways connected to the citric acid cycle at Kyoto Encyclopedia of Genes and Genomes
metpath: Interactive representation of the citric acid cycle
Metallurgy
Metallurgy is a domain of materials science and engineering that studies the physical and chemical behavior of metallic elements, their inter-metallic compounds, and their mixtures, which are known as alloys. Metallurgy encompasses both the science and the technology of metals, including the production of metals and the engineering of metal components used in products for both consumers and manufacturers. Metallurgy is distinct from the craft of metalworking. Metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. A specialist practitioner of metallurgy is known as a metallurgist. The science of metallurgy is further subdivided into two broad categories: chemical metallurgy and physical metallurgy. Chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. Subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation (corrosion). In contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. Topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. Historically, metallurgy has predominantly focused on the production of metals. Metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. Metal alloys are often a blend of at least two different metallic elements. However, non-metallic elements are often added to alloys in order to achieve properties suitable for an application. The study of metal production is subdivided into ferrous metallurgy (also known as black metallurgy) and non-ferrous metallurgy, also known as colored metallurgy. Ferrous metallurgy involves processes and alloys based on iron, while non-ferrous metallurgy involves processes and alloys based on other metals. The production of ferrous metals accounts for 95% of world metal production. Modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. Some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals (including welding, brazing, and soldering). Emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials (semiconductors) and surface engineering.

Etymology and pronunciation
Metallurgy derives from the Ancient Greek μεταλλουργός (metallourgós), "worker in metal", from μέταλλον (métallon), "mine, metal", and ἔργον (érgon), "work". The word was originally an alchemist's term for the extraction of metals from minerals, the ending -urgy signifying a process, especially manufacturing: it was discussed in this sense in the 1797 Encyclopædia Britannica. In the late 19th century, metallurgy's definition was extended to the more general scientific study of metals, alloys, and related processes. In English, the pronunciation with stress on the second syllable is the more common one in the United Kingdom, while the pronunciation with stress on the first syllable is the more common one in the United States and is the first-listed variant in various American dictionaries, including Merriam-Webster Collegiate and American Heritage.

History
The earliest recorded metal employed by humans appears to be gold, which can be found either free or "native".
Small amounts of natural gold have been found in Spanish caves dating to the late Paleolithic period, 40,000 BC. Silver, copper, tin and meteoric iron can also be found in native form, allowing a limited amount of metalworking in early cultures. Early cold metallurgy using native copper, not melted from mineral, has been documented at sites in Anatolia and at the site of Tell Maghzaliyah in Iraq, dating from the 7th/6th millennia BC. The earliest archaeological support of smelting (hot metallurgy) in Eurasia is found in the Balkans and Carpathian Mountains, as evidenced by findings of objects made by metal casting and smelting dated to around 6000-5000 BC. Certain metals, such as tin, lead, and copper, can be recovered from their ores by simply heating the rocks in a fire or blast furnace in a process known as smelting. The first evidence of copper smelting, dating from the 6th millennium BC, has been found at archaeological sites in Majdanpek, Jarmovac and Pločnik, in present-day Serbia. The site of Pločnik has produced a smelted copper axe dating from 5,500 BC, belonging to the Vinča culture. The Balkans and adjacent Carpathian region were the location of major Chalcolithic cultures including Vinča, Varna, Karanovo, Gumelnița and Hamangia, which are often grouped together under the name of 'Old Europe'. With the Carpatho-Balkan region described as the 'earliest metallurgical province in Eurasia', its scale and technical quality of metal production in the 6th–5th millennia BC totally overshadowed that of any other contemporary production centre. The earliest documented use of lead (possibly native or smelted) in the Near East dates from the 6th millennium BC and comes from the late Neolithic settlements of Yarim Tepe and Arpachiyah in Iraq. The artifacts suggest that lead smelting may have predated copper smelting. Metallurgy of lead has also been found in the Balkans during the same period. Copper smelting is documented at sites in Anatolia and at the site of Tal-i Iblis in southeastern Iran from c. 5000 BC. Copper smelting is first documented in the Delta region of northern Egypt in c. 4000 BC, associated with the Maadi culture. This represents the earliest evidence for smelting in Africa. The Varna Necropolis, Bulgaria, is a burial site located in the western industrial zone of Varna, approximately 4 km from the city centre; it is internationally considered one of the key archaeological sites in world prehistory. The oldest gold treasure in the world, dating from 4,600 BC to 4,200 BC, was discovered at the site. The gold piece dating from 4,500 BC, found in 2019 in Durankulak, near Varna, is another important example. Other signs of early metals are found from the third millennium BC in Palmela, Portugal, Los Millares, Spain, and Stonehenge, United Kingdom. The precise beginnings, however, have not been clearly ascertained, and new discoveries are continuous and ongoing. In approximately 1900 BC, ancient iron smelting sites existed in Tamil Nadu. In the Near East, about 3,500 BC, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. This represented a major technological shift known as the Bronze Age. The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. The process appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines.
Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. This includes the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia in present-day Turkey, Ancient Nok, Carthage, the Celts, Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, ancient and medieval Japan, amongst others. A 16th century book by Georg Agricola, De re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. Agricola has been described as the "father of metallurgy". Extraction Extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. In order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. Extractive metallurgists are interested in three primary streams: feed, concentrate (metal oxide/sulphide) and tailings (waste). After mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. Concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. Mining may not be necessary, if the ore body and physical environment are conducive to leaching. Leaching dissolves minerals in an ore body and results in an enriched solution. The solution is collected and processed to extract valuable metals. Ore bodies often contain more than one valuable metal. Tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. Additionally, a concentrate may contain more than one valuable metal. That concentrate would then be processed to separate the valuable metals into individual constituents. Metal and its alloys Much effort has been placed on understanding iron–carbon alloy system, which includes steels and cast irons. Plain carbon steels (those that contain essentially only carbon as an alloying element) are used in low-cost, high-strength applications, where neither weight nor corrosion are a major concern. Cast irons, including ductile iron, are also part of the iron-carbon system. Iron-Manganese-Chromium alloys (Hadfield-type steels) are also used in non-magnetic applications such as directional drilling. Other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. These metals are most often used as alloys with the noted exception of silicon, which is not a metal. Other forms include: Stainless steel, particularly Austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. Aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. Copper-nickel alloys (such as Monel) are used in highly corrosive environments and for non-magnetic applications. Nickel-based superalloys like Inconel are used in high-temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. For extremely high temperatures, single crystal alloys are used to minimize creep. 
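As a concrete illustration of the kind of quantitative reasoning applied to the iron–carbon system mentioned at the start of this section, the sketch below applies the lever rule to estimate phase fractions in a plain carbon steel. It is a minimal example under assumed, approximate textbook compositions (in wt% carbon) that are not taken from this article.

```python
# Minimal lever-rule sketch for the iron-carbon system. The compositions are
# approximate textbook values (wt% carbon), not figures from the article:
# ferrite ~0.022, eutectoid point ~0.76, cementite ~6.70.

FERRITE_C = 0.022     # max carbon solubility in alpha-ferrite near the eutectoid temperature
EUTECTOID_C = 0.76    # eutectoid composition
CEMENTITE_C = 6.70    # carbon content of cementite (Fe3C)

def phase_fractions(c0, c_low, c_high):
    """Lever rule: mass fractions of the low- and high-carbon constituents for an
    overall composition c0 lying between c_low and c_high."""
    if not c_low < c0 < c_high:
        raise ValueError("c0 must lie between the two phase compositions")
    f_low = (c_high - c0) / (c_high - c_low)
    return f_low, 1.0 - f_low

# Fractions of proeutectoid ferrite and pearlite in a slowly cooled 0.40 wt% C steel,
# treating pearlite as having the eutectoid composition.
ferrite, pearlite = phase_fractions(0.40, FERRITE_C, EUTECTOID_C)
print(f"proeutectoid ferrite ~{ferrite:.2f}, pearlite ~{pearlite:.2f}")  # ~0.49 / ~0.51
```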
In modern electronics, high purity single crystal silicon is essential for metal-oxide-silicon transistors (MOS) and integrated circuits. Production In production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. This involves production of alloys, shaping, heat treatment and surface treatment of product. The task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. To achieve this goal, the operating environment must be carefully considered. Determining the hardness of the metal using the Rockwell, Vickers, and Brinell hardness scales is a commonly used practice that helps better understand the metal's elasticity and plasticity for different applications and production processes. In a saltwater environment, most ferrous metals and some non-ferrous alloys corrode quickly. Metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. Metals under continual cyclic loading can suffer from metal fatigue. Metals under constant stress at elevated temperatures can creep. Metalworking processes Casting – molten metal is poured into a shaped mold. Variants of casting include sand casting, investment casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. Each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. Forging – a red-hot billet is hammered into shape. Rolling – a billet is passed through successively narrower rollers to create a sheet. Extrusion – a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. Machining – lathes, milling machines and drills cut the cold metal to shape. Sintering – a powdered metal is heated in a non-oxidizing environment after being compressed into a die. Fabrication – sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. Laser cladding – metallic powder is blown through a movable laser beam (e.g. mounted on a NC 5-axis machine). The resulting melted metal reaches a substrate to form a melt pool. By moving the laser head, it is possible to stack the tracks and build up a three-dimensional piece. 3D printing – Sintering or melting amorphous powder metal in a 3D space to make any object to shape. Cold-working processes, in which the product's shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. Work hardening creates microscopic defects in the metal, which resist further changes of shape. Heat treatment Metals can be heat-treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. Common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering: Annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft-edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking; it is also easier to sand, grind, or cut annealed metal. 
Quenching is the process of cooling metal very quickly after heating, thus "freezing" the metal's molecules in the very hard martensite form, which makes the metal harder. Tempering relieves stresses in the metal that were caused by the hardening process; tempering makes the metal less hard while making it better able to sustain impacts without breaking. Often, mechanical and thermal treatments are combined in what are known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high-alloy special steels, superalloys and titanium alloys. Plating Electroplating is a chemical surface-treatment technique. It involves bonding a thin layer of another metal such as gold, silver, chromium or zinc to the surface of the product. This is done by selecting the coating material electrolyte solution, which is the material that is going to coat the workpiece (gold, silver, zinc). There needs to be two electrodes of different materials: one the same material as the coating material and one that is receiving the coating material. Two electrodes are electrically charged and the coating material is stuck to the work piece. It is used to reduce corrosion as well as to improve the product's aesthetic appearance. It is also used to make inexpensive metals look like the more expensive ones (gold, silver). Shot peening Shot peening is a cold working process used to finish metal parts. In the process of shot peening, small round shot is blasted against the surface of the part to be finished. This process is used to prolong the product life of the part, prevent stress corrosion failures, and also prevent fatigue. The shot leaves small dimples on the surface like a peen hammer does, which cause compression stress under the dimple. As the shot media strikes the material over and over, it forms many overlapping dimples throughout the piece being treated. The compression stress in the surface of the material strengthens the part and makes it more resistant to fatigue failure, stress failures, corrosion failure, and cracking. Thermal spraying Thermal spraying techniques are another popular finishing option, and often have better high temperature properties than electroplated coatings. Thermal spraying, also known as a spray welding process, is an industrial coating process that consists of a heat source (flame or other) and a coating material that can be in a powder or wire form, which is melted then sprayed on the surface of the material being treated at a high velocity. The spray treating process is known by many different names such as HVOF (High Velocity Oxygen Fuel), plasma spray, flame spray, arc spray and metalizing. Electroless deposition Electroless deposition (ED) or electroless plating is defined as the autocatalytic process through which metals and metal alloys are deposited onto nonconductive surfaces. These nonconductive surfaces include plastics, ceramics, and glass etc., which can then become decorative, anti-corrosive, and conductive depending on their final functions. Electroless deposition is a chemical processes that create metal coatings on various materials by autocatalytic chemical reduction of metal cations in a liquid bath. Characterization Metallurgists study the microscopic and macroscopic structure of metals using metallography, a technique invented by Henry Clifton Sorby. In metallography, an alloy of interest is ground flat and polished to a mirror finish. 
The sample can then be etched to reveal the microstructure and macrostructure of the metal. The sample is then examined in an optical or electron microscope, and the image contrast provides details on the composition, mechanical properties, and processing history. Crystallography, often using diffraction of x-rays or electrons, is another valuable tool available to the modern metallurgist. Crystallography allows identification of unknown materials and reveals the crystal structure of the sample. Quantitative crystallography can be used to calculate the amount of phases present as well as the degree of strain to which a sample has been subjected.

See also
Adrien Chenot
Archaeometallurgy
Blacksmith
CALPHAD
Carbonyl metallurgy
Cupellation
Experimental archaeometallurgy
Forging
Goldbeating
Gold phosphine complex
Metallurgical failure analysis
Metalworking
Mineral industry
Pyrometallurgy
Welding
Discovery science
Discovery science (also known as discovery-based science) is a scientific methodology which aims to find new patterns, correlations, and form hypotheses through the analysis of large-scale experimental data. The term “discovery science” encompasses various fields of study, including basic, translational, and computational science and research. Discovery-based methodologies are commonly contrasted with traditional scientific practice, the latter involving hypothesis formation before experimental data is closely examined. Discovery science involves the process of inductive reasoning or using observations to make generalisations, and can be applied to a range of science-related fields, e.g., medicine, proteomics, hydrology, psychology, and psychiatry. Overview Purpose Discovery science places an emphasis on 'basic' discovery, which can fundamentally change the status quo. For example, in the early years of water resources research, the use of discovery science was demonstrated by seeking to elucidate phenomena that was, until that point, unexplained. It did not matter how unusual these ideas may have been perceived to be. In this sense, discovery science is based on the attitude that ‘‘we must not allow our concepts of the earth, in so far as they transcend the reach of observation, to root themselves so deeply and so firmly in our minds that the process of uprooting them causes mental discomfort" (as stated by Davis in 1926). For discovery science to be utilised, there is a need to revert to creating and testing genuine hypotheses, rather than focusing on praising concepts that are already familiar. While researchers commonly feel that new hypotheses will naturally emerge inductively from curiosity in the relevant field, it should be acknowledged that hypotheses can be generated by models. Additionally, deductive testing must involve field observation, so that imperfect answers can be substituted with questions that are more clearly defined. Tools Hypothesis-driven studies can be transformed into discovery-driven studies with the help of newly available tools and technology-driven life science research. These tools have allowed for new questions to be asked, and new paradigms to be considered, particularly in the field of biology. However, some of these required tools are limited in the sense that they are inaccessible or too costly because the related technology is still being developed. Data mining is the most common tool used in discovery science, and is applied to data from diverse fields of study such as DNA analysis, climate modelling, nuclear reaction modelling, and others. The use of data mining in discovery science follows a general trend of increasing use of computers and computational theory in all fields of science, and newer methods of data mining employ specialised machine learning algorithms for automated hypothesis forming and automated theorem proving. Applications While computational methods are gaining interest, there is a decline in efforts to support critical care through basic and translational science, i.e., forms of discovery science which are essential for advancing understanding of pathophysiology. A loss of interest in basic and translational science may lead to a failure to discover and develop new therapies, which could have an impact on the critically ill. Within critical care, there is an aim to renew emphasis on basic, translational science through platforms such as medical journals and conferences, as well as the critical care medical curricula. 
Advances in discovery-based science thereby underlie key discoveries and development in medicine, constituting a 'pipeline' for leading-edge medical development. Medicine According to the AACR Cancer Progress Report 2021, discovery science has the potential to drive clinical breakthroughs. Since discovery science underlies key discoveries and development of new therapies for medicine, it remains important for advancing critical care. Numerous discoveries have increased life span and productivity, and decreased health-related costs, thereby revolutionising medical care. Resultantly, return on investment for discovery science has proven to be high. For example, its combination of computational methods with knowledge on inflammatory and genomic pathways has resulted in optimised clinical trials. Ultimately, discovery science is currently enabling a transition to the era of personalised medicine for treating complex syndromes, e.g., sepsis and ARDS. With a robust infrastructure, discovery science can resultantly revolutionise medical care and biological research. Genomics Discovery science has converged with clinical medicine and cancer genomics, and this convergence has been accelerated by recent advances in genome technologies and genomic information. The effect of cancer genomics has been noticeable in every area of cancer research. The majority of successful applications of genomic knowledge in today's clinical medicine involves a wealth of knowledge which has been gathered by a broad range of research and decades of work. Biological insights are required to inform drug discovery and to set a clear clinical path for development. Historically, acquisition of such knowledge through functional and mechanistic studies has been uncoordinated, random, and inefficient. The process of moving from cancer genomic discoveries to personalised medicine involves some major scientific, logistical and regulatory hurdles. This includes patient consent, sample acquisition, clinical annotation and study design, all of which can lead to data generation and computational analyses. Additionally, functional and mechanistic studies remain a challenge, which can lead to drug and biomarker discovery and development, commercial challenges and genomics-informed clinical trials. Importantly, these key scientific challenges are interdependent with each other. Directed and streamlined approaches are sought to be developed for a rapid generation of biological discoveries, which can allow for cancer genomic discoveries to translate to the clinic. Delivering personalised cancer medicine benefits from traditional, unconstrained and non-directed academic exploration, with the goal of directing scientific inquiry to convert genomic discovery to diagnostic and therapeutic targets. Proteomics Another example of discovery science is proteomics, a technology-driven and technology limited discovery science. Technologies for proteomic analysis provide information that is useful in discovery science. Proteome analysis as a discovery science is applicable in biotechnology, e.g., it assists in 1) the discovery of biochemical pathways which can identify targets for therapies, 2) developing new processes for manufacturing biological materials, 3) monitoring manufacturing processes for the purpose of quality control, and 4) developing diagnostic tests and efficacious treatment strategies for clinical diseases. 
In the context of proteomics, current life-science research remains technology-limited, however, recent available tools have assisted in evolving such research from being hypothesis-driven to discovery-driven. Hydrology Field hydrology has experienced a decline in progress due to a change from discovery-based field work to the gathering of data for modal parameterisation. In field hydrology, models are not any more useful than an understanding of how systems work, and discovery science allows for this understanding. Several important examples of field-based inquiry and discovery have taken place in field hydrology. These include: identifying spatial patterns of soil moisture and how they relate to topography; interrogating such data through the use of geostatistics; and discovering the importance of macropore flow and hydrological connectivity. Some discovery-based questions that have been asked in field hydrology include 1) determining which parts of the watershed are most important in determining water delivery to the channel, 2) how the presence of 'old' water can be explained by groundwater travelling into the stream, and 3) how there can be an explanation for flashy hydrographs when there is no overland flow visible. Therefore, there is a need for discovery science in field hydrology, despite any unusual hydrological hypotheses that are formed. Psychology An example of discovery science being enhanced for human brain function can be seen in the 1000 Functional Connectomes Project (FCP). This project was launched in 2009 as a way of generating and collecting functional magnetic resonance imaging (fMRI) data from over 1,000 individuals. Similarly to decoding the human genome, the mapping of human brain function presents challenges to the functional neuroimaging community. For the first phase of discovery science, it is necessary to accumulate and share large-scale datasets for data mining. Traditionally, the neuroimaging community within psychology has focused on task-based and hypothesis-driven approaches, however, a powerful tool for discovery science has emerged in the form of resting-state functional MRI (R-fMRI). The potential of discovery science remains vast, e.g. 1) helping with decision-making and guiding clinical diagnoses by developing objective measures of brain functional integrity, 2) assessing the level of efficacy of treatment interventions, and 3) tracking responses to treatment. Among the scientific community, recruiting participation and achieving collaboration from the broad population is essential for successfully implementing discovery-based science in the context of human brain function. Methodology Discovery-based methodologies are often viewed in contrast to traditional scientific practice, where hypotheses are formed before close examination of experimental data. However, from a philosophical perspective where all or most of the observable "low-hanging fruit" has already been plucked, examining the phenomenological world more closely than the senses alone (even augmented senses, e.g. via microscopes, telescopes, bifocals etc.) opens a new source of knowledge for hypothesis formation. This process is also known as inductive reasoning or the use of specific observations to make generalisations. Discovery science is usually a complex process, and consequently does not follow a simple linear cause and effect pattern. This means that outcomes are uncertain, and it is expected to have disappointing results as a fundamental part of discovery science. 
In particular, this may apply to medicine for the critically ill, where disease syndromes may be complex and multi-factorial. In psychiatry, studying complex relationships between brain and behaviour requires a large-scale science. This calls for a need to conceptually switch from hypothesis-driven studies to hypothesis-generating research which is discovery-based. Normally, discovery-based approaches for research are initially hypothesis-free, however, hypothesis testing can be elevated to a new level that effectively supports traditional hypothesis-driven studies. Researchers hope that combining integrative analyses of data from a range of different levels can result in new classification approaches to enable personalised interventions. Some biologists, such as Leroy Hood, have suggested that the model of ‘discovery science’ is a model which certain research fields are heading towards. For example, it is believed that more information about gene function can be discovered, through the evolution of data-mining tools. Discovery-based approaches are often referred to as “big data” approaches, because of the large-scale datasets that they involve analyses of. Big data includes large-scale homogenous study designs and highly variant datasets, and can be further divided into different kinds of datasets. For example, in neuropsychiatric studies, big data can be categorised as ‘broad’ or ‘deep’ data. Broad data is complex and heterogenous, as it is collected from multiple sources (e.g., labs and institutions) and uses different kinds of standards. On the other hand, deep data is collected at multiple levels, e.g., from genes to molecules, cells, circuits, behaviours, and symptoms. Broad data allows for population level inferences to be made; deep data is required for personalised medicine. However, combining broad and deep data and storing them in large-scale databases makes it practically impossible to rely on traditional statistical approaches. Instead, the use of discovery-based big data approaches can allow for the generation of hypotheses and offer an analytical tool with high-throughput for pattern recognition and data mining. It is in this way that discovery-based approaches can provide insight into causes and mechanisms of the area of study. Although discovery-based and data-driven big data approaches can inform understanding of mechanisms behind the topic of concern, the success of these approaches depends on integrated analyses of the various types of relevant data, and the resultant insight provided. For example, when researching psychiatric dysfunction, it is important to integrate vast and complex data such as brain imaging, genomic data and behavioural data, to uncover any brain-behaviour connections that are relevant to psychiatric dysfunction. Therefore, there are challenges to integrating data and developing mining tools. Furthermore, validation of results is a big challenge for discovery-based science. Although it is possible for results to be statistically validated by independent datasets, tests of functionality affect ultimate validation. Collaborative efforts are therefore critical for success. References Scientific method
Substitution reaction
A substitution reaction (also known as single displacement reaction or single substitution reaction) is a chemical reaction during which one functional group in a chemical compound is replaced by another functional group. Substitution reactions are of prime importance in organic chemistry. Substitution reactions in organic chemistry are classified either as electrophilic or nucleophilic depending upon the reagent involved, whether a reactive intermediate involved in the reaction is a carbocation, a carbanion or a free radical, and whether the substrate is aliphatic or aromatic. Detailed understanding of a reaction type helps to predict the product outcome in a reaction. It also is helpful for optimizing a reaction with regard to variables such as temperature and choice of solvent. A good example of a substitution reaction is halogenation. When chlorine gas (Cl2) is irradiated, some of the molecules are split into two chlorine radicals (Cl•), whose free electrons are strongly nucleophilic. One of them breaks a C–H covalent bond in CH4 and grabs the hydrogen atom to form the electrically neutral HCl. The other radical reforms a covalent bond with the CH3• to form CH3Cl (methyl chloride). Nucleophilic substitution In organic (and inorganic) chemistry, nucleophilic substitution is a fundamental class of reactions in which a nucleophile selectively bonds with or attacks the positive or partially positive charge on an atom or a group of atoms. As it does so, it replaces a weaker nucleophile, which then becomes a leaving group; the remaining positive or partially positive atom becomes an electrophile. The whole molecular entity of which the electrophile and the leaving group are part is usually called the substrate. The most general form for the reaction may be given as Nuc:− + R−LG → R−Nuc + LG:−, where R−LG indicates the substrate. The electron pair (:) from the nucleophile (Nuc:) attacks the substrate, forming a new covalent bond Nuc−R. The prior state of charge is restored when the leaving group (LG) departs with an electron pair. The principal product in this case is R−Nuc. In such reactions, the nucleophile is usually electrically neutral or negatively charged, whereas the substrate is typically neutral or positively charged. An example of nucleophilic substitution is the hydrolysis of an alkyl bromide, R−Br, under basic conditions, where the attacking nucleophile is the base OH− and the leaving group is Br−: R−Br + OH− → R−OH + Br−. Nucleophilic substitution reactions are commonplace in organic chemistry, and they can be broadly categorized as taking place at a saturated aliphatic carbon or (less often) at an aromatic or other unsaturated carbon center. Mechanisms Nucleophilic substitutions can proceed by two different mechanisms, unimolecular nucleophilic substitution (SN1) and bimolecular nucleophilic substitution (SN2). The two reactions are named according to their rate law, with SN1 having a first-order rate law, and SN2 having a second-order one. The SN1 mechanism has two steps. In the first step, the leaving group departs, forming a carbocation (C+). In the second step, the nucleophilic reagent (Nuc:) attaches to the carbocation and forms a covalent sigma bond. If the substrate has a chiral carbon, this mechanism can result in either inversion of the stereochemistry or retention of configuration. Usually, both occur without preference. The result is racemization. The stability of a carbocation (C+) depends on how many other carbon atoms are bonded to it. 
This results in SN1 reactions usually occurring on atoms with at least two carbons bonded to them. A more detailed explanation of this can be found in the main SN1 reaction page. The SN2 mechanism has just one step. The attack of the reagent and the expulsion of the leaving group happen simultaneously. This mechanism always results in inversion of configuration. If the substrate that is under nucleophilic attack is chiral, the reaction will therefore lead to an inversion of its stereochemistry, called a Walden inversion. SN2 attack may occur if the backside route of attack is not sterically hindered by substituents on the substrate. Therefore, this mechanism usually occurs at an unhindered primary carbon center. If there is steric crowding on the substrate near the leaving group, such as at a tertiary carbon center, the substitution will involve an SN1 rather than an SN2 mechanism. Other types of nucleophilic substitution include nucleophilic acyl substitution and nucleophilic aromatic substitution. Acyl substitution occurs when a nucleophile attacks a carbon that is doubly bonded to one oxygen and singly bonded to another group (which can be another oxygen, or N, S, or a halogen); this carbonyl-containing unit is called an acyl group. The nucleophile attacks the carbon, causing the double bond to break into a single bond. The double bond can then re-form, kicking off the leaving group in the process. Aromatic substitution occurs on compounds with systems of double bonds connected in rings. See aromatic compounds for more. Electrophilic substitution Electrophiles are involved in electrophilic substitution reactions, particularly in electrophilic aromatic substitutions. In this example, the benzene ring's electron resonance structure is attacked by an electrophile E+. The resonating bond is broken and a carbocation resonating structure results. Finally a proton is kicked out and a new aromatic compound is formed. Electrophilic reactions to other unsaturated compounds than arenes generally lead to electrophilic addition rather than substitution. Radical substitution A radical substitution reaction involves radicals. An example is the Hunsdiecker reaction. Organometallic substitution Coupling reactions are a class of metal-catalyzed reactions involving an organometallic compound RM and an organic halide R′X that together react to form a compound of the type R-R′ with formation of a new carbon–carbon bond. Examples include the Heck reaction, Ullmann reaction, and Wurtz–Fittig reaction. Many variations exist. Substituted compounds Substituted compounds are compounds where one or more hydrogen atoms have been replaced with something else such as an alkyl, hydroxy, or halogen. More can be found on the substituted compounds page. Inorganic and organometallic chemistry While it is common to discuss substitution reactions in the context of organic chemistry, the reaction is generic and applies to a wide range of compounds. Ligands in coordination complexes are susceptible to substitution. Both associative and dissociative mechanisms have been observed. Associative substitution, for example, is typically applied to organometallic and coordination complexes, but resembles the SN2 mechanism in organic chemistry. The opposite pathway is dissociative substitution, being analogous to the SN1 pathway. Examples of associative mechanisms are commonly found in the chemistry of 16e square planar metal complexes, e.g. Vaska's complex and tetrachloroplatinate. The rate law is governed by the Eigen–Wilkins Mechanism. 
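Throughout this article, mechanisms are distinguished kinetically by the order of their rate laws: SN1 and dissociative ligand substitution are first order in the substrate or complex alone, while SN2 and associative substitution also depend on the incoming nucleophile or ligand. The short sketch below is purely illustrative and not from the article; the function names and rate constants are invented placeholders.

```python
# Illustrative sketch (not from the article): the two limiting rate laws that
# distinguish substitution mechanisms kinetically. Rate constants are
# arbitrary placeholders.

def rate_first_order(k: float, substrate: float) -> float:
    """SN1 / dissociative limit: rate = k[substrate], independent of the nucleophile."""
    return k * substrate

def rate_second_order(k: float, substrate: float, nucleophile: float) -> float:
    """SN2 / associative limit: rate = k[substrate][nucleophile]."""
    return k * substrate * nucleophile

for nu in (0.5, 1.0):  # doubling the nucleophile concentration...
    print(rate_first_order(1e-4, 0.1),        # ...leaves the first-order rate unchanged
          rate_second_order(1e-2, 0.1, nu))   # ...but doubles the second-order rate
```

Measuring how the observed rate responds to the concentration of the nucleophile (or entering ligand) is therefore the usual experimental handle on which pathway operates.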
Dissociative substitution resembles the SN1 mechanism in organic chemistry. This pathway can be well described by the cis effect, or the labilization of CO ligands in the cis position. Complexes that undergo dissociative substitution are often coordinatively saturated and often have octahedral molecular geometry. The entropy of activation is characteristically positive for these reactions, which indicates that the disorder of the reacting system increases in the rate-determining step. Dissociative pathways are characterized by a rate determining step that involves release of a ligand from the coordination sphere of the metal undergoing substitution. The concentration of the substituting nucleophile has no influence on this rate, and an intermediate of reduced coordination number can be detected. The reaction can be described with k1, k−1 and k2, which are the rate constants of their corresponding intermediate reaction steps:
LnM−L ⇌ LnM−□ + L (dissociation with rate constant k1, re-association with k−1)
LnM−□ + L′ → LnM−L′ (rate constant k2)
Normally the rate determining step is the dissociation of L from the complex, and [L′] does not affect the rate of reaction, leading to the simple rate equation:
Rate = k1[LnM−L]
Further reading Imyanitov, Naum S. (1993). "Is This Reaction a Substitution, Oxidation-Reduction, or Transfer?". J. Chem. Educ. 70 (1): 14–16. Bibcode:1993JChEd..70...14I. doi:10.1021/ed070p14. References
Metonymy
Metonymy is a figure of speech in which a concept is referred to by the name of something closely associated with that thing or concept. Etymology The words metonymy and metonym come from Ancient Greek μετωνυμία (metōnymía) 'a change of name', from μετά (metá) 'after, beyond' and -ωνυμία (-ōnymía), a suffix that names figures of speech, from ὄνυμα (ónyma), a variant of ὄνομα (ónoma) 'name'. Background Metonymy and related figures of speech are common in everyday speech and writing. Synecdoche and metalepsis are considered specific types of metonymy. Polysemy, the capacity for a word or phrase to have multiple meanings, sometimes results from relations of metonymy. Both metonymy and metaphor involve the substitution of one term for another. In metaphor, this substitution is based on some specific analogy between two things, whereas in metonymy the substitution is based on some understood association or contiguity. American literary theorist Kenneth Burke considers metonymy as one of four "master tropes": metaphor, metonymy, synecdoche, and irony. He discusses them in particular ways in his book A Grammar of Motives. Whereas Roman Jakobson argued that the fundamental dichotomy in trope was between metaphor and metonymy, Burke argues that the fundamental dichotomy is between irony and synecdoche, which he also describes as the dichotomy between dialectic and representation, or again between reduction and perspective. In addition to its use in everyday speech, metonymy is a figure of speech in some poetry and in much rhetoric. Greek and Latin scholars of rhetoric made significant contributions to the study of metonymy. Meaning relationships Metonymy takes many different forms. Synecdoche uses a part to refer to the whole, or the whole to refer to the part. Metalepsis uses a familiar word or a phrase in a new context. For example, "lead foot" may describe a fast driver; lead is proverbially heavy, and a foot exerting more pressure on the accelerator causes a vehicle to go faster (in this context unduly so). The figure of speech is a "metonymy of a metonymy". Many cases of polysemy originate as metonyms: for example, "chicken" means the meat as well as the animal; "crown" for the object, as well as the institution. Versus metaphor Metonymy works by the contiguity (association) between two concepts, whereas the term "metaphor" is based upon their analogous similarity. When people use metonymy, they do not typically wish to transfer qualities from one referent to another as they do with metaphor. There is nothing press-like about reporters or crown-like about a monarch, but "the press" and "the crown" are both common metonyms. Some uses of figurative language may be understood as both metonymy and metaphor; for example, the relationship between "a crown" and a "king" could be interpreted metaphorically (i.e., the king, like his gold crown, could be seemingly stiff yet ultimately malleable, over-ornate, and consistently immobile). In the phrase "lands belonging to the crown", the word "crown" is a metonymy. The reason is that monarchs by and large indeed wear a crown, physically. In other words, there is a pre-existent link between "crown" and "monarchy". On the other hand, when Ghil'ad Zuckermann argues that the Israeli language is a "phoenicuckoo cross with some magpie characteristics", he is using metaphors. There is no physical link between a language and a bird. 
The reason the metaphors "phoenix" and "cuckoo" are used is that on the one hand hybridic "Israeli" is based on Hebrew, which, like a phoenix, rises from the ashes; and on the other hand, hybridic "Israeli" is based on Yiddish, which like a cuckoo, lays its egg in the nest of another bird, tricking it to believe that it is its own egg. Furthermore, the metaphor "magpie" is employed because, according to Zuckermann, hybridic "Israeli" displays the characteristics of a magpie, "stealing" from languages such as Arabic and English. Two examples using the term "fishing" help clarify the distinction. The phrase "to fish pearls" uses metonymy, drawing from "fishing" the idea of taking things from the ocean. What is carried across from "fishing fish" to "fishing pearls" is the domain of metonymy. In contrast, the metaphorical phrase "fishing for information" transfers the concept of fishing into a new domain. If someone is "fishing" for information, we do not imagine that the person is anywhere near the ocean; rather, we transpose elements of the action of fishing (waiting, hoping to catch something that cannot be seen, probing, and most importantly, trying) into a new domain (a conversation). Thus, metaphors work by presenting a target set of meanings and using them to suggest a similarity between items, actions, or events in two domains, whereas metonymy calls up or references a specific domain (here, removing items from the sea). Sometimes, metaphor and metonymy may both be at work in the same figure of speech, or one could interpret a phrase metaphorically or metonymically. For example, the phrase "lend me your ear" could be analyzed in a number of ways. One could imagine the following interpretations: Analyze "ear" metonymically first – "ear" means "attention" (because people use ears to pay attention to each other's speech). Now, when we hear the phrase "Talk to him; you have his ear", it symbolizes he will listen to you or that he will pay attention to you. Another phrase "lending an ear (attention)", we stretch the base meaning of "lend" (to let someone borrow an object) to include the "lending" of non-material things (attention), but, beyond this slight extension of the verb, no metaphor is at work. Imagine the whole phrase literally – imagine that the speaker literally borrows the listener's ear as a physical object (and the person's head with it). Then the speaker has temporary possession of the listener's ear, so the listener has granted the speaker temporary control over what the listener hears. The phrase "lend me your ear" is interpreted to metaphorically mean that the speaker wants the listener to grant the speaker temporary control over what the listener hears. First, analyze the verb phrase "lend me your ear" metaphorically to mean "turn your ear in my direction", since it is known that, literally lending a body part is nonsensical. Then, analyze the motion of ears metonymically – we associate "turning ears" with "paying attention", which is what the speaker wants the listeners to do. It is difficult to say which analysis above most closely represents the way a listener interprets the expression, and it is possible that different listeners analyse the phrase in different ways, or even in different ways at different times. Regardless, all three analyses yield the same interpretation. Thus, metaphor and metonymy, though different in their mechanism, work together seamlessly. 
Examples Here are some broad kinds of relationships where metonymy is frequently used: Containment: When one thing contains another, it can frequently be used metonymically, as when "dish" is used to refer not to a plate but to the food it contains, or as when the name of a building is used to refer to the entity it contains, as when "the White House" or "the Pentagon" are used to refer to the Administration of the United States, or the U.S. Department of Defense, respectively. A physical item, place, or body part used to refer to a related concept, such as "the bench" for the judicial profession, "stomach" or "belly" for appetite or hunger, "mouth" for speech, being "in diapers" for infancy, "palate" for taste, "the altar" or "the aisle" for marriage, "hand" for someone's responsibility for something ("he had a hand in it"), "head" or "brain" for mind or intelligence, or "nose" for concern about someone else's affairs, (as in "keep your nose out of my business"). A reference to Timbuktu, as in "from here to Timbuktu," usually means a place or idea is too far away or mysterious. Metonymy of objects or body parts for concepts is common in dreams. Tools/instruments: Often a tool is used to signify the job it does or the person who does the job, as in the phrase "his Rolodex is long and valuable" (referring to the Rolodex instrument, which keeps contact business cards, meaning he has a lot of contacts and knows many people). Also "the press" (referring to the printing press), or as in the proverb, "The pen is mightier than the sword." Product for process: This is a type of metonymy where the product of the activity stands for the activity itself. For example, in "The book is moving right along," the book refers to the process of writing or publishing. Punctuation marks often stand metonymically for a meaning expressed by the punctuation mark. For example, "He's a big question mark to me" indicates that something is unknown. In the same way, 'period' can be used to emphasise that a point is concluded or not to be challenged. Synecdoche: A part of something is often used for the whole, as when people refer to "head" of cattle or assistants are referred to as "hands." An example of this is the Canadian dollar, referred to as the loonie for the image of a bird on the one-dollar coin. United States one hundred-dollar bills are often referred to as "Bens", "Benjamins" or "Franklins" because they bear a portrait of Benjamin Franklin. Also, the whole of something is used for a part, as when people refer to a municipal employee as "the city" or police officers as "the law". Toponyms: A country's capital city or some location within the city is frequently used as a metonym for the country's government, such as Washington, D.C., in the United States; Ottawa in Canada; Rome in Italy; Paris in France; Tokyo in Japan; New Delhi in India; London in the United Kingdom; Moscow in Russia etc. Perhaps the oldest such example is "Pharaoh" which originally referred to the residence of the King of Egypt but by the New Kingdom had come to refer to the king himself. Similarly, other important places, such as Wall Street, K Street, Madison Avenue, Silicon Valley, Hollywood, Vegas, and Detroit are commonly used to refer to the industries that are located there (finance, lobbying, advertising, high technology, entertainment, gambling, and motor vehicles, respectively). 
Such usage may persist even when the industries in question have moved elsewhere, for example, Fleet Street continues to be used as a metonymy for the British national press, though many national publications are no longer headquartered on the street of that name. Places and institutions A place is often used as a metonym for a government or other official institutions, for example, Brussels for the institutions of the European Union, The Hague for the International Court of Justice or International Criminal Court, Nairobi for the government of Kenya, the Kremlin for the Russian presidency, Chausseestraße and Pullach for the German Federal Intelligence Service, Number 10, Downing Street or Whitehall for the prime minister of the United Kingdom and the UK civil service, the White House and Capitol Hill for the executive and legislative branches, respectively, of the United States federal government, Foggy Bottom for the U.S. State Department, Langley for the Central Intelligence Agency, Quantico for either the Federal Bureau of Investigation academy and forensic laboratory or the Marine Corps base of the same name, Malacañang for the President of the Philippines, their advisers and Office of the President, "La Moncloa" for the Prime Minister of Spain, and Vatican for the pope, Holy See and Roman Curia. Other names of addresses or locations can become convenient shorthand names in international diplomacy, allowing commentators and insiders to refer impersonally and succinctly to foreign ministries with impressive and imposing names as (for example) the Quai d'Orsay, the Wilhelmstrasse, the Kremlin, and the Porte. A place (or places) can represent an entire industry. For instance: Wall Street, used metonymically, can stand for the entire U.S. financial and corporate banking sector; K Street for Washington, D.C.'s lobbying industry or lobbying in the United States in general; Hollywood for the U.S. film industry, and the people associated with it; Broadway for the American commercial theatrical industry; Madison Avenue for the American advertising industry; and Silicon Valley for the American technology industry. The High Street (of which there are over 5,000 in Britain) is a term commonly used to refer to the entire British retail sector. Common nouns and phrases can also be metonyms: "red tape" can stand for bureaucracy, whether or not that bureaucracy uses actual red tape to bind documents. In Commonwealth realms, The Crown is a metonym for the state in all its aspects. In recent Israeli usage, the term "Balfour" came to refer to the Israeli Prime Minister's residence, located on Balfour Street in Jerusalem, to all the streets around it where demonstrations frequently take place, and also to the Prime Minister and his family who live in the residence. Rhetoric in ancient history Western culture studied poetic language and deemed it to be rhetoric. A. Al-Sharafi supports this concept in his book Textual Metonymy, "Greek rhetorical scholarship at one time became entirely poetic scholarship." Philosophers and rhetoricians thought that metaphors were the primary figurative language used in rhetoric. Metaphors served as a better means to attract the audience's attention because the audience had to read between the lines in order to get an understanding of what the speaker was trying to say. Others did not think of metonymy as a good rhetorical method because metonymy did not involve symbolism. 
Al-Sharafi explains, "This is why they undermined practical and purely referential discourse because it was seen as banal and not containing anything new, strange or shocking." Greek scholars contributed to the definition of metonymy. For example, Isocrates worked to define the difference between poetic language and non-poetic language by saying that, "Prose writers are handicapped in this regard because their discourse has to conform to the forms and terms used by the citizens and to those arguments which are precise and relevant to the subject-matter." In other words, Isocrates proposes here that metaphor is a distinctive feature of poetic language because it conveys the experience of the world afresh and provides a kind of defamiliarisation in the way the citizens perceive the world. Democritus described metonymy by saying, "Metonymy, that is the fact that words and meaning change." Aristotle discussed different definitions of metaphor, regarding one type as what we know to be metonymy today. Latin scholars also had an influence on metonymy. The treatise Rhetorica ad Herennium states metonymy as, "the figure which draws from an object closely akin or associated an expression suggesting the object meant, but not called by its own name." The author describes the process of metonymy to us saying that we first figure out what a word means. We then figure out that word's relationship with other words. We understand and then call the word by a name that it is associated with. "Perceived as such then metonymy will be a figure of speech in which there is a process of abstracting a relation of proximity between two words to the extent that one will be used in place of another." Cicero viewed metonymy as more of a stylish rhetorical method and described it as being based on words, but motivated by style. Jakobson, structuralism and realism Metonymy became important in French structuralism through the work of Roman Jakobson. In his 1956 essay "The Metaphoric and Metonymic Poles", Jakobson relates metonymy to the linguistic practice of [syntagmatic] combination and to the literary practice of realism. He explains: The primacy of the metaphoric process in the literary schools of Romanticism and symbolism has been repeatedly acknowledged, but it is still insufficiently realized that it is the predominance of metonymy which underlies and actually predetermines the so-called 'realistic' trend, which belongs to an intermediary stage between the decline of Romanticism and the rise of symbolism and is opposed to both. Following the path of contiguous relationships, the realistic author metonymically digresses from the plot to the atmosphere and from the characters to the setting in space and time. He is fond of synecdochic details. In the scene of Anna Karenina's suicide Tolstoy's artistic attention is focused on the heroine's handbag; and in War and Peace the synecdoches "hair on the upper lip" or "bare shoulders" are used by the same writer to stand for the female characters to whom these features belong. Jakobson's theories were important for Claude Lévi-Strauss, Roland Barthes, Jacques Lacan, and others. Dreams can use metonyms. Art Metonyms can also be wordless. For example, Roman Jakobson argued that cubist art relied heavily on nonlinguistic metonyms, while surrealist art relied more on metaphors. Lakoff and Turner argued that all words are metonyms: "Words stand for the concepts they express." Some artists have used actual words as metonyms in their paintings. 
For example, Miró's 1925 painting "Photo: This is the Color of My Dreams" has the word "photo" to represent the image of his dreams. This painting comes from a series of paintings called peintures-poésies (paintings-poems) which reflect Miró's interest in dreams and the subconscious and the relationship of words, images, and thoughts. Picasso, in his 1911 painting "Pipe Rack and Still Life on Table" inserts the word "Ocean" rather than painting an ocean: These paintings by Miró and Picasso are, in a sense, the reverse of a rebus: the word stands for the picture, instead of the picture standing for the word. See also -onym Antonomasia Deferred reference Eggcorn Eponym Enthymeme Euphemism by comparison Generic trademark Kenning List of metonyms Meronymy Newspeak Pars pro toto Simile Slang Sobriquet Social stereotype Synecdoche Totum pro parte References Citations Sources Further reading Figures of speech Narrative techniques Semantics Tropes by type
Collision theory
Collision theory is a principle of chemistry used to predict the rates of chemical reactions. It states that when suitable particles of the reactant hit each other with the correct orientation, only a certain fraction of the collisions result in a perceptible or notable change; these successful changes are called successful collisions. The successful collisions must have enough energy, also known as activation energy, at the moment of impact to break the pre-existing bonds and form all new bonds. This results in the products of the reaction. The activation energy is often predicted using transition state theory. Increasing the concentration of the reactant brings about more collisions and hence more successful collisions. Increasing the temperature increases the average kinetic energy of the molecules in a solution, increasing the number of collisions that have enough energy. Collision theory was proposed independently by Max Trautz in 1916 and William Lewis in 1918. When a catalyst is involved in the collision between the reactant molecules, less energy is required for the chemical change to take place, and hence more collisions have sufficient energy for the reaction to occur. The reaction rate therefore increases. Collision theory is closely related to chemical kinetics. Collision theory was initially developed for the gas reaction system with no dilution. But most reactions involve solutions, for example, gas reactions in a carrying inert gas, and almost all reactions in solutions. The collision frequency of the solute molecules in these solutions is now controlled by diffusion or Brownian motion of individual molecules. The flux of the diffusive molecules follows Fick's laws of diffusion. For particles in a solution, an example model to calculate the collision frequency and associated coagulation rate is the Smoluchowski coagulation equation proposed by Marian Smoluchowski in a seminal 1916 publication. In this model, Fick's flux at the infinite time limit is used to mimic the particle speed of the collision theory. Jixin Chen proposed a finite-time solution to the diffusion flux in 2022 which significantly changes the estimated collision frequency of two particles in a solution. Rate equations The rate for a bimolecular gas-phase reaction, A + B → product, predicted by collision theory is r(T) = k·nA·nB = Z·ρ·e^(−Ea/(RT)), where: k is the rate constant in units of (number of molecules)−1⋅s−1⋅m3. nA is the number density of A in the gas in units of m−3. nB is the number density of B in the gas in units of m−3. E.g. for a gas mixture with gas A concentration 0.1 mol⋅L−1 and B concentration 0.2 mol⋅L−1, the number density of A is 0.1×6.02×10²³÷10⁻³ = 6.02×10²⁵ m−3, and the number density of B is 0.2×6.02×10²³÷10⁻³ = 1.2×10²⁶ m−3. Z is the collision frequency in units of m−3⋅s−1. ρ is the steric factor. Ea is the activation energy of the reaction, in units of J⋅mol−1. T is the temperature in units of K. R is the gas constant in units of J mol−1K−1. The unit of r(T) can be converted to mol⋅L−1⋅s−1 after dividing by (1000×NA), where NA is the Avogadro constant. For a reaction between A and B, the collision frequency calculated with the hard-sphere model, with the unit number of collisions per m³ per second, is Z = nA·nB·σAB·√(8kBT/(πμAB)) = 10⁶·NA²·[A][B]·σAB·√(8kBT/(πμAB)), where: nA is the number density of A in the gas in units of m−3. nB is the number density of B in the gas in units of m−3. E.g. 
for a gas mixture with gas A concentration 0.1 mol⋅L−1 and B concentration 0.2 mol⋅L−1, the number density of A is 0.1×6.02×10²³÷10⁻³ = 6.02×10²⁵ m−3, and the number density of B is 0.2×6.02×10²³÷10⁻³ = 1.2×10²⁶ m−3. σAB is the reaction cross section (unit m2), the area when two molecules collide with each other, simplified to σAB = π(rA + rB)², where rA is the radius of A and rB the radius of B in unit m. kB is the Boltzmann constant, unit J⋅K−1. T is the absolute temperature (unit K). μAB is the reduced mass of the reactants A and B, μAB = mAmB/(mA + mB) (unit kg). NA is the Avogadro constant. [A] is the molar concentration of A in unit mol⋅L−1. [B] is the molar concentration of B in unit mol⋅L−1. Z can be converted to mole collisions per liter per second by dividing by 1000NA. If all the units that are related to dimension are converted to dm, i.e. mol⋅dm−3 for [A] and [B], dm2 for σAB, dm2⋅kg⋅s−2⋅K−1 for the Boltzmann constant, then the collision frequency is in unit mol⋅dm−3⋅s−1. Quantitative insights Derivation Consider the bimolecular elementary reaction: A + B → C In collision theory it is considered that two particles A and B will collide if their nuclei get closer than a certain distance. The area around a molecule A in which it can collide with an approaching B molecule is called the cross section (σAB) of the reaction and is, in simplified terms, the area corresponding to a circle whose radius is the sum of the radii of both reacting molecules, which are supposed to be spherical. A moving molecule will therefore sweep a volume σAB⟨v⟩ per second as it moves, where ⟨v⟩ is the average velocity of the particle. (This solely represents the classical notion of a collision of solid balls. As molecules are quantum-mechanical many-particle systems of electrons and nuclei based upon the Coulomb and exchange interactions, generally they neither obey rotational symmetry nor do they have a box potential. Therefore, more generally the cross section is defined as the reaction probability of a ray of A particles per areal density of B targets, which makes the definition independent from the nature of the interaction between A and B. Consequently, the radius is related to the length scale of their interaction potential.) From kinetic theory it is known that a molecule of A has an average velocity (different from root mean square velocity) of ⟨v⟩ = √(8kBT/(πmA)), where kB is the Boltzmann constant and mA is the mass of the molecule. The solution of the two-body problem states that two different moving bodies can be treated as one body which has the reduced mass of both and moves with the velocity of the center of mass, so, in this system the reduced mass μAB must be used instead of mA. Thus, for a given molecule A, it travels an average distance λ = 1/(nB·σAB) before hitting a molecule B if all B are fixed with no movement, where λ is the average traveling distance. Since B also moves, the relative velocity can be calculated using the reduced mass of A and B. Therefore, the total collision frequency, of all A molecules, with all B molecules, is Z = nA·nB·σAB·√(8kBT/(πμAB)). From the Maxwell–Boltzmann distribution it can be deduced that the fraction of collisions with more energy than the activation energy is e^(−Ea/(RT)). Therefore, the rate of a bimolecular reaction for ideal gases will be r(T) = ρ·Z·e^(−Ea/(RT)), in unit number of molecular reactions s−1⋅m−3, where: Z is the collision frequency with unit s−1⋅m−3. The z is Z without [A][B]. ρ is the steric factor, which will be discussed in detail in the next section. Ea is the activation energy (per mole) of the reaction in unit J/mol. T is the absolute temperature in unit K. R is the gas constant in unit J/mol/K. [A] is the molar concentration of A in unit mol/L. [B] is the molar concentration of B in unit mol/L. The product zρ is equivalent to the preexponential factor of the Arrhenius equation.
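The collision frequency and rate expressions above can be evaluated numerically. The sketch below is illustrative rather than part of the article: it reuses the worked concentrations from the text (0.1 mol/L of A and 0.2 mol/L of B) but assumes hypothetical molecular radii, masses, activation energy, and a steric factor of 1.

```python
import math

# Physical constants
k_B = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23     # Avogadro constant, 1/mol
R   = 8.314462618       # gas constant, J/(mol K)

# Worked example from the text: [A] = 0.1 mol/L, [B] = 0.2 mol/L
n_A = 0.1 * N_A / 1e-3  # number density of A, m^-3 (about 6.02e25)
n_B = 0.2 * N_A / 1e-3  # number density of B, m^-3 (about 1.2e26)

# Hypothetical molecular parameters (assumptions, not from the article)
r_A, r_B = 2.0e-10, 2.5e-10          # radii, m
m_A, m_B = 30e-3 / N_A, 50e-3 / N_A  # molecular masses, kg
T, E_a, rho = 298.0, 50e3, 1.0       # temperature K, activation energy J/mol, steric factor

sigma_AB = math.pi * (r_A + r_B) ** 2            # collision cross section, m^2
mu_AB = m_A * m_B / (m_A + m_B)                  # reduced mass, kg
mean_rel_speed = math.sqrt(8 * k_B * T / (math.pi * mu_AB))

Z = n_A * n_B * sigma_AB * mean_rel_speed        # collisions per m^3 per second
rate = rho * Z * math.exp(-E_a / (R * T))        # reactions per m^3 per second
rate_molar = rate / (1000 * N_A)                 # mol L^-1 s^-1

print(f"Z    = {Z:.3e} m^-3 s^-1")
print(f"rate = {rate_molar:.3e} mol L^-1 s^-1")
```

The absolute numbers depend strongly on the assumed radii and activation energy; the comparison with experiment in the next section is therefore usually framed through the steric factor rather than through absolute rates.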
Validity of the theory and steric factor Once a theory is formulated, its validity must be tested, that is, compare its predictions with the results of the experiments. When the expression form of the rate constant is compared with the rate equation for an elementary bimolecular reaction, r = k(T)[A][B], it is noticed that k(T) = NA·σAB·√(8kBT/(πμAB))·e^(−Ea/(RT)), with unit M−1⋅s−1 (= dm3⋅mol−1⋅s−1) when all dimensional quantities, including those in kB, are expressed in dm. This expression is similar to the Arrhenius equation and gives the first theoretical explanation for the Arrhenius equation on a molecular basis. The weak temperature dependence of the preexponential factor is so small compared to the exponential factor that it cannot be measured experimentally, that is, "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T dependence of the preexponential factor is observed experimentally". Steric factor If the values of the predicted rate constants are compared with the values of known rate constants, it is noticed that collision theory fails to estimate the constants correctly, and the more complex the molecules are, the more it fails. The reason for this is that particles have been supposed to be spherical and able to react in all directions, which is not true, as the orientation of the collisions is not always proper for the reaction. For example, in the hydrogenation reaction of ethylene the H2 molecule must approach the bonding zone between the atoms, and only a few of all the possible collisions fulfill this requirement. To alleviate this problem, a new concept must be introduced: the steric factor ρ. It is defined as the ratio between the experimental value and the predicted one (or the ratio between the frequency factor and the collision frequency), ρ = A/Z, and it is most often less than unity. Usually, the more complex the reactant molecules, the lower the steric factor. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions, which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions), etc.

Experimental rate constants compared to the ones predicted by collision theory for gas phase reactions:

Reaction | A, s−1M−1 | Z, s−1M−1 | Steric factor
2ClNO → 2Cl + 2NO | 9.4 | 5.9 | 0.16
2ClO → Cl2 + O2 | 6.3 | 2.5 | 2.3
H2 + C2H4 → C2H6 | 1.24 | 7.3 | 1.7
Br2 + K → KBr + Br | 1.0 | 2.1 | 4.3

Collision theory can be applied to reactions in solution; in that case, the solvent cage has an effect on the reactant molecules, and several collisions can take place in a single encounter, which leads to predicted preexponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions.

Experimental rate constants compared to the ones predicted by collision theory for reactions in solution:

Reaction | Solvent | A, 10¹¹ s−1⋅M−1 | Z, 10¹¹ s−1⋅M−1 | Steric factor
C2H5Br + OH− | ethanol | 4.30 | 3.86 | 1.11
C2H5O− + CH3I | ethanol | 2.42 | 1.93 | 1.25
ClCH2CO2− + OH− | water | 4.55 | 2.86 | 1.59
C3H6Br2 + I− | methanol | 1.07 | 1.39 | 0.77
HOCH2CH2Cl + OH− | water | 25.5 | 2.78 | 9.17
4-CH3C6H4O− + CH3I | ethanol | 8.49 | 1.99 | 4.27
CH3(CH2)2Cl + I− | acetone | 0.085 | 1.57 | 0.054
C5H5N + CH3I | C2H2Cl4 | — | — | 2.0 10
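The steric factors tabulated above are simply the ratio ρ = A/Z between the observed frequency factor and the calculated collision frequency. A minimal sketch of that bookkeeping follows; it is illustrative only, the helper name is invented, and the numbers are copied from a few rows of the solution-phase table as printed.

```python
# Steric factor as the ratio of the experimental frequency factor A to the
# calculated collision frequency Z (both in the same units, here 10^11 /(M*s)).
def steric_factor(A_exp: float, Z_calc: float) -> float:
    return A_exp / Z_calc

solution_data = {
    "C2H5Br + OH- (ethanol)":     (4.30, 3.86),
    "ClCH2CO2- + OH- (water)":    (4.55, 2.86),
    "HOCH2CH2Cl + OH- (water)":   (25.5, 2.78),
    "CH3(CH2)2Cl + I- (acetone)": (0.085, 1.57),
}

for reaction, (A_exp, Z_calc) in solution_data.items():
    print(f"{reaction}: rho = {steric_factor(A_exp, Z_calc):.2f}")
```

Running it reproduces the steric factors listed in the table (1.11, 1.59, 9.17 and 0.054): values below unity point to orientational constraints, while values above unity reflect the solvent-cage and entropic effects mentioned above.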
Alternative collision models for diluted solutions Collision in diluted gas or liquid solution is regulated by diffusion instead of direct collisions, which can be calculated from Fick's laws of diffusion. Theoretical models to calculate the collision frequency in solutions have been proposed by Marian Smoluchowski in a seminal 1916 publication at the infinite time limit, and Jixin Chen in 2022 at a finite-time approximation. A scheme comparing the rate equations in pure gas and solution is shown in the right figure. For a diluted solution in the gas or the liquid phase, the collision equation developed for neat gas is not suitable when diffusion takes control of the collision frequency, i.e., the direct collision between the two molecules no longer dominates. For any given molecule A, it has to collide with a lot of solvent molecules, let's say molecule C, before finding the B molecule to react with. Thus the probability of collision should be calculated using the Brownian motion model, which can be approximated to a diffusive flux using various boundary conditions that yield different equations in the Smoluchowski model and the JChen Model. For the diffusive collision, at the infinite time limit when the molecular flux can be calculated from Fick's laws of diffusion, in 1916 Smoluchowski derived a collision frequency between molecule A and B in a diluted solution, ZAB = 4π·rAB·Dr·nA·nB, where: ZAB is the collision frequency, unit #collisions/s in 1 m3 of solution. rAB is the radius of the collision cross-section, unit m. Dr is the relative diffusion constant between A and B, unit m2/s, with Dr = DA + DB. nA and nB are the number concentrations of molecules A and B in the solution respectively, unit #molecule/m3. Equivalently, in molar units, ZAB = kd[A][B] with kd = 1000·NA·4π·rAB·Dr (rAB and Dr in SI units, the factor of 1000 converting m3 to L), where: ZAB is in unit mole collisions/s in 1 L of solution. NA is the Avogadro constant. rAB is the radius of the collision cross-section, unit m. Dr is the relative diffusion constant between A and B, unit m2/s. [A] and [B] are the molar concentrations of A and B respectively, unit mol/L. kd is the diffusive collision rate constant, unit L mol−1 s−1. There have been a lot of extensions and modifications to the Smoluchowski model since it was proposed in 1916. In 2022, Chen argues that because the diffusive flux is evolving over time and the distance between the molecules has a finite value at a given concentration, there should be a critical time to cut off the evolution of the flux, which gives a value much larger than the infinite-time solution Smoluchowski proposed. So he proposes to use the average time for two molecules to switch places in the solution as the critical cut-off time, i.e., the first-neighbor visiting time. Although an alternative time could be the mean free path time or the average first passenger time, the latter overestimates the concentration gradient between the original location of the first passenger and the target. This hypothesis yields a fractal reaction kinetic rate equation of diffusive collision in a diluted solution, in which: the rate is in unit mole collisions/s in 1 L of solution; NA is the Avogadro constant; σAB is the area of the collision cross-section in unit m2; the equation also involves the product of the unitless fractions of reactive surface area on A and B and the corresponding effective adsorption cross-section area; Dr is the relative diffusion constant between A and B, unit m2/s, with Dr = DA + DB; [A] and [B] are the molar concentrations of A and B respectively, unit mol/L; and the diffusive collision rate constant has unit L^(4/3)·mol^(−4/3)·s^(−1). See also Two-dimensional gas Rate equation References External links Introduction to Collision Theory Chemical kinetics
Metastability
In chemistry and physics, metastability is an intermediate energetic state within a dynamical system other than the system's state of least energy. A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope. Bowling pins show similar metastability by either merely wobbling for a moment or tipping over completely. A common example of metastability in science is isomerisation. Higher energy isomers are long lived because they are prevented from rearranging to their preferred ground state by (possibly large) barriers in the potential energy. During a metastable state of finite lifetime, all state-describing parameters reach and hold stationary values. In isolation: the state of least energy is the only one the system will inhabit for an indefinite length of time, until more external energy is added to the system (unique "absolutely stable" state); the system will spontaneously leave any other state (of higher energy) to eventually return (after a sequence of transitions) to the least energetic state. The metastability concept originated in the physics of first-order phase transitions. It then acquired new meaning in the study of aggregated subatomic particles (in atomic nuclei or in atoms) or in molecules, macromolecules or clusters of atoms and molecules. Later, it was borrowed for the study of decision-making and information transmission systems. Metastability is common in physics and chemistry – from an atom (many-body assembly) to statistical ensembles of molecules (viscous fluids, amorphous solids, liquid crystals, minerals, etc.) at molecular levels or as a whole (see Metastable states of matter and grain piles below). The abundance of states is more prevalent as the systems grow larger and/or if the forces of their mutual interaction are spatially less uniform or more diverse. In dynamic systems (with feedback) like electronic circuits, signal trafficking, decisional, neural and immune systems, the time-invariance of the active or reactive patterns with respect to the external influences defines stability and metastability (see brain metastability below). In these systems, the equivalent of thermal fluctuations in molecular systems is the "white noise" that affects signal propagation and the decision-making. Statistical physics and thermodynamics Non-equilibrium thermodynamics is a branch of physics that studies the dynamics of statistical ensembles of molecules via unstable states. Being "stuck" in a thermodynamic trough without being at the lowest energy state is known as having kinetic stability or being kinetically persistent. The particular motion or kinetics of the atoms involved has resulted in getting stuck, despite there being preferable (lower-energy) alternatives. States of matter Metastable states of matter (also referred as metastates) range from melting solids (or freezing liquids), boiling liquids (or condensing gases) and sublimating solids to supercooled liquids or superheated liquid-gas mixtures. Extremely pure, supercooled water stays liquid below 0 °C and remains so until applied vibrations or condensing seed doping initiates crystallization centers. This is a common situation for the droplets of atmospheric clouds. Condensed matter and macromolecules Metastable phases are common in condensed matter and crystallography. 
This is the case for anatase, a metastable polymorph of titanium dioxide, which despite commonly being the first phase to form in many synthesis processes due to its lower surface energy, is always metastable, with rutile being the most stable phase at all temperatures and pressures. As another example, diamond is a stable phase only at very high pressures, but is a metastable form of carbon at standard temperature and pressure. It can be converted to graphite (plus leftover kinetic energy), but only after overcoming an activation energy – an intervening hill. Martensite is a metastable phase used to control the hardness of most steel. Metastable polymorphs of silica are commonly observed. In some cases, such as in the allotropes of solid boron, acquiring a sample of the stable phase is difficult. The bonds between the building blocks of polymers such as DNA, RNA, and proteins are also metastable. Adenosine triphosphate (ATP) is a highly metastable molecule, colloquially described as being "full of energy" that can be used in many ways in biology. Generally speaking, emulsions/colloidal systems and glasses are metastable. The metastability of silica glass, for example, is characterised by lifetimes on the order of 10⁹⁸ years (as compared with the lifetime of the universe, which is thought to be around 1.4×10¹⁰ years). Sandpiles are one system which can exhibit metastability if a steep slope or tunnel is present. Sand grains form a pile due to friction. It is possible for an entire large sand pile to reach a point where it is stable, but the addition of a single grain causes large parts of it to collapse. The avalanche is a well-known problem with large piles of snow and ice crystals on steep slopes. In dry conditions, snow slopes act similarly to sandpiles. An entire mountainside of snow can suddenly slide due to the presence of a skier, or even a loud noise or vibration. Quantum mechanics Aggregated systems of subatomic particles described by quantum mechanics (quarks inside nucleons, nucleons inside atomic nuclei, electrons inside atoms, molecules, or atomic clusters) are found to have many distinguishable states. Of these, one (or a small degenerate set) is indefinitely stable: the ground state or global minimum. All other states besides the ground state (or those degenerate with it) have higher energies. Of all these other states, the metastable states are the ones having lifetimes lasting at least 10² to 10³ times longer than the shortest-lived states of the set. A metastable state is then long-lived (locally stable with respect to configurations of 'neighbouring' energies) but not eternal (as the global minimum is). Being excited – of an energy above the ground state – it will eventually decay to a more stable state, releasing energy. Indeed, above absolute zero, all states of a system have a non-zero probability to decay; that is, to spontaneously fall into another state (usually lower in energy). One mechanism for this to happen is through tunnelling. Nuclear physics Some energetic states of an atomic nucleus (having distinct spatial mass, charge, spin, isospin distributions) are much longer-lived than others (nuclear isomers of the same isotope), e.g. technetium-99m. The isotope tantalum-180m, although being a metastable excited state, is long-lived enough that it has never been observed to decay, with a half-life calculated to be at least 4.5×10¹⁶ years, over 3 million times the current age of the universe. Atomic and molecular physics Some atomic energy levels are metastable. 
Rydberg atoms are an example of metastable excited atomic states. Transitions from metastable excited levels are typically those forbidden by electric dipole selection rules. This means that any transitions from this level are relatively unlikely to occur. In a sense, an electron that happens to find itself in a metastable configuration is trapped there. Since transitions from a metastable state are not impossible (merely less likely), the electron will eventually decay to a less energetic state, typically by an electric quadrupole transition, or often by non-radiative de-excitation (e.g., collisional de-excitation). This slow-decay property of a metastable state is apparent in phosphorescence, the kind of photoluminescence seen in glow-in-the-dark toys that can be charged by first being exposed to bright light. Whereas spontaneous emission in atoms has a typical timescale on the order of 10−8 seconds, the decay of metastable states can typically take milliseconds to minutes, and so light emitted in phosphorescence is usually both weak and long-lasting. Chemistry In chemical systems, a system of atoms or molecules involving a change in chemical bond can be in a metastable state, which lasts for a relatively long period of time. Molecular vibrations and thermal motion make chemical species at the energetic equivalent of the top of a round hill very short-lived. Metastable states that persist for many seconds (or years) are found in energetic valleys which are not the lowest possible valley (point 1 in illustration). A common type of metastability is isomerism. The stability or metastability of a given chemical system depends on its environment, particularly temperature and pressure. The difference between producing a stable vs. metastable entity can have important consequences. For instances, having the wrong crystal polymorph can result in failure of a drug while in storage between manufacture and administration. The map of which state is the most stable as a function of pressure, temperature and/or composition is known as a phase diagram. In regions where a particular state is not the most stable, it may still be metastable. Reaction intermediates are relatively short-lived, and are usually thermodynamically unstable rather than metastable. The IUPAC recommends referring to these as transient rather than metastable. Metastability is also used to refer to specific situations in mass spectrometry and spectrochemistry. Electronic circuits A digital circuit is supposed to be found in a small number of stable digital states within a certain amount of time after an input change. However, if an input changes at the wrong moment a digital circuit which employs feedback (even a simple circuit such as a flip-flop) can enter a metastable state and take an unbounded length of time to finally settle into a fully stable digital state. Computational neuroscience Metastability in the brain is a phenomenon studied in computational neuroscience to elucidate how the human brain recognizes patterns. Here, the term metastability is used rather loosely. There is no lower-energy state, but there are semi-transient signals in the brain that persist for a while and are different than the usual equilibrium state. 
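A common thread in the physical examples above is a state trapped behind an energy barrier that random fluctuations only occasionally carry it over. A minimal numerical sketch of that picture, not drawn from any source cited here, is overdamped Brownian motion in an asymmetric double-well potential: a particle started in the shallower well dwells there for a long but finite time before noise pushes it across the barrier into the deeper well. The potential, noise strength, and thresholds below are arbitrary choices for illustration.

```python
import random, math

# Asymmetric double-well potential with a local (metastable) minimum near x = -1
# and a global minimum near x = +1, separated by a barrier near x = 0.
def dU_dx(x: float) -> float:
    # U(x) = x**4 - 2*x**2 - 0.3*x  =>  U'(x) = 4*x**3 - 4*x - 0.3
    return 4 * x**3 - 4 * x - 0.3

def escape_time(noise: float = 0.6, dt: float = 1e-3, max_steps: int = 2_000_000) -> float:
    """Overdamped Langevin dynamics started in the metastable well (x = -1).
    Returns the simulated time at which the particle first clears the barrier."""
    x = -1.0
    for step in range(max_steps):
        x += -dU_dx(x) * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        if x > 0.5:                      # well past the barrier top
            return step * dt
    return float("inf")                  # did not escape within the simulated window

random.seed(1)
print("escape times:", [round(escape_time(), 1) for _ in range(5)])
```

The dwell times fluctuate strongly from run to run, but lowering the noise (the analogue of cooling the system) lengthens them roughly exponentially, the same Arrhenius-type behaviour that makes diamond, martensite, and phosphorescent excited states long-lived.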
In Philosophy Gilbert Simondon invokes a notion of metastability for his understanding of systems that rather than resolve their tensions and potentials for transformation into a single final state rather, 'conserves the tensions in the equilibrium of metastability instead of nullifying them in the equilibrium of stability' as a critique of cybernetic notions of homeostasis. See also False vacuum Hysteresis Metastate References Chemical properties Dynamical systems
Integrated assessment modelling
Integrated assessment modelling (IAM) or integrated modelling (IM) is a term used for a type of scientific modelling that tries to link main features of society and economy with the biosphere and atmosphere into one modelling framework. The goal of integrated assessment modelling is to accommodate informed policy-making, usually in the context of climate change though also in other areas of human and social development. While the detail and extent of integrated disciplines varies strongly per model, all climatic integrated assessment modelling includes economic processes as well as processes producing greenhouse gases. Other integrated assessment models also integrate other aspects of human development such as education, health, infrastructure, and governance. These models are integrated because they span multiple academic disciplines, including economics and climate science and for more comprehensive models also energy systems, land-use change, agriculture, infrastructure, conflict, governance, technology, education, and health. The word assessment comes from the use of these models to provide information for answering policy questions. To quantify these integrated assessment studies, numerical models are used. Integrated assessment modelling does not provide predictions for the future but rather estimates what possible scenarios look like. There are different types of integrated assessment models. One classification distinguishes between firstly models that quantify future developmental pathways or scenarios and provide detailed, sectoral information on the complex processes modelled. Here they are called process-based models. Secondly, there are models that aggregate the costs of climate change and climate change mitigation to find estimates of the total costs of climate change. A second classification makes a distinction between models that extrapolate verified patterns (via econometrics equations), or models that determine (globally) optimal economic solutions from the perspective of a social planner, assuming (partial) equilibrium of the economy. Process-based models Intergovernmental Panel on Climate Change (IPCC) has relied on process-based integrated assessment models to quantify mitigation scenarios. They have been used to explore different pathways for staying within climate policy targets such as the 1.5 °C target agreed upon in the Paris Agreement. Moreover, these models have underpinned research including energy policy assessment and simulate the Shared socioeconomic pathways. Notable modelling frameworks include IMAGE, MESSAGEix, AIM/GCE, GCAM, REMIND-MAgPIE, and WITCH-GLOBIOM. While these scenarios are highly policy-relevant, interpretation of the scenarios should be done with care. Non-equilibrium models include those based on econometric equations and evolutionary economics (such as E3ME), and agent-based models (such as the agent-based DSK-model). These models typically do not assume rational and representative agents, nor market equilibrium in the long term. Aggregate cost-benefit models Cost-benefit integrated assessment models are the main tools for calculating the social cost of carbon, or the marginal social cost of emitting one more tonne of carbon (as carbon dioxide) into the atmosphere at any point in time. For instance, the DICE, PAGE, and FUND models have been used by the US Interagency Working Group to calculate the social cost of carbon and its results have been used for regulatory impact analysis. 
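As a rough illustration of what a cost-benefit calculation of this kind involves (this is not the method of DICE, PAGE, or FUND, whose climate and damage modules are far more elaborate), the social cost of carbon can be thought of as the discounted present value of the additional damages caused by emitting one more tonne of CO2. The damage stream, growth rate, and discount rates in the sketch below are invented placeholders.

```python
# Toy illustration (assumption-laden, not model output): discount a stream of
# hypothetical marginal damages, in dollars per year caused by one extra tonne
# of CO2, back to a present value.

def social_cost_of_carbon(marginal_damages: list[float], discount_rate: float) -> float:
    """Present value of the marginal damages of one extra tonne of CO2."""
    return sum(d / (1 + discount_rate) ** t for t, d in enumerate(marginal_damages))

# Assume the extra tonne causes $0.80 of damage in year 0, growing 2% per year
# for a century as the exposed economy grows.
damages = [0.80 * 1.02 ** t for t in range(100)]

for r in (0.025, 0.05):
    print(f"discount rate {r:.1%}: SCC ~ ${social_cost_of_carbon(damages, r):.0f} per tonne")
```

Even in this toy setting, doubling the discount rate changes the estimate by more than a factor of two, which is one concrete reason the estimates are described below as highly uncertain.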
This type of modelling is carried out to find the total cost of climate impacts, which are generally considered a negative externality not captured by conventional markets. In order to correct such a market failure, for instance by using a carbon tax, an estimate of the cost of emissions is required. However, estimates of the social cost of carbon are highly uncertain and will remain so for the foreseeable future. It has been argued that "IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory, and can fool policy-makers into thinking that the forecasts the models generate have some kind of scientific legitimacy". Still, it has been argued that attempting to calculate the social cost of carbon is useful for gaining insight into the effect of certain processes on climate impacts, as well as for better understanding one of the determinants of international cooperation in the governance of climate agreements. Integrated assessment models have not been used solely to assess environmental or climate change-related fields. They have also been used to analyse patterns of conflict, the Sustainable Development Goals, trends across issue areas in Africa, and food security. Shortcomings All numerical models have shortcomings. Integrated assessment models for climate change, in particular, have been severely criticised for problematic assumptions that led to greatly overestimating the cost/benefit ratio for mitigating climate change while relying on economic models inappropriate to the problem. In 2021, the integrated assessment modelling community examined gaps in what was termed the "possibility space" and how these might best be consolidated and addressed. In an October 2021 working paper, Nicholas Stern argues that existing IAMs are inherently unable to capture the economic realities of the climate crisis in its current state of rapid progress. Models built on optimisation methodologies have received numerous critiques; a prominent one draws on the ideas of dynamical systems theory, which understands systems as changing with no deterministic pathway or end-state. This implies a very large, or even infinite, number of possible states of the system in the future, with aspects and dynamics that cannot be known to observers of the current state of the system. This type of uncertainty around future states of an evolutionary system has been referred to as ‘radical’ or ‘fundamental’ uncertainty. This has led some researchers to call for more work on the broader array of possible futures and for modelling research on alternative scenarios that have yet to receive substantial attention, for example post-growth scenarios. Notes References External links Integrated Assessment Society Integrated Assessment Journal Climate change policy Environmental science Environmental social science Scientific modelling Management cybernetics
Fields of Science and Technology
Fields of Science and Technology (FOS) is a compulsory classification of branches of scholarly and technical fields, used for statistical purposes and published by the OECD in 2002. It was created out of the need to exchange data on research facilities, research results, and the like. It was revised in 2007 under the name Revised Fields of Science and Technology.
List
Natural sciences: Mathematics; Computer and information sciences; Physical sciences; Chemical sciences; Earth and related environmental sciences; Biological sciences; Other natural sciences
Engineering and technology: Civil engineering; Electrical engineering, electronic engineering, information engineering; Mechanical engineering; Chemical engineering; Materials engineering; Medical engineering; Environmental engineering; Systems engineering; Environmental biotechnology; Industrial biotechnology; Nano technology; Other engineering and technologies
Medical and health sciences: Basic medicine; Clinical medicine; Health sciences; Health biotechnology; Other medical sciences
Agricultural sciences: Agriculture, forestry, and fisheries; Animal and dairy science; Veterinary science; Agricultural biotechnology; Other agricultural sciences
Social sciences: Psychology; Economics and business; Educational sciences; Sociology; Law; Political science; Social and economic geography; Media and communications; Other social sciences
Humanities: History and archaeology; Languages and literature; Philosophy, ethics and religion; Arts (arts, history of arts, performing arts, music); Other humanities
See also International Standard Classification of Education International Standard Classification of Occupations Wissenschaft – epistemological concept in which serious scholarly works of history, literature, art, and religion are similar to natural sciences References OECD Scientific classification
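Because the classification was created to exchange research statistics between institutions, it is often encoded in machine-readable form. The sketch below is only one possible, informal representation, assuming a nested mapping from major fields to subfields; the FOS standard itself does not prescribe this structure or these key names.

```python
# Illustrative (non-normative) encoding of part of the FOS classification
# as a nested mapping from major fields to their subfields.
FOS = {
    "Natural sciences": [
        "Mathematics",
        "Computer and information sciences",
        "Physical sciences",
        "Chemical sciences",
        "Earth and related environmental sciences",
        "Biological sciences",
        "Other natural sciences",
    ],
    "Agricultural sciences": [
        "Agriculture, forestry, and fisheries",
        "Animal and dairy science",
        "Veterinary science",
        "Agricultural biotechnology",
        "Other agricultural sciences",
    ],
    # Remaining major fields omitted for brevity.
}

def classify(subfield):
    """Return the major field a subfield belongs to, or None if unknown."""
    for major, subfields in FOS.items():
        if subfield in subfields:
            return major
    return None

print(classify("Veterinary science"))  # prints: Agricultural sciences
```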
Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation. The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment. In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow. All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today. Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science. Heredity Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. 
Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner. Sources of variation Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species. An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely. Mutation Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial. 
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line. One example of mutation is wild boar piglets. They are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour and different mutations causing dominant black colouring. Sex and recombination In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution. The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. 
Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial. Gene flow Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfer is provided by the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea. Epigenetics Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. Evolutionary forces From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias. Natural selection Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population.
It embodies three principles: Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation). Different traits confer different rates of survival and reproduction (differential fitness). These traits can be passed from generation to generation (heritability of fitness). More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking. The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele likely becoming rarer; it is "selected against." Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. Nevertheless, re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms. Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity.
This would, for example, cause organisms to eventually have a similar height. Natural selection most generally makes nature the measure against which individuals and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation. Genetic drift Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles. According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities. 
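The interplay of natural selection and genetic drift described above can be made concrete with a small simulation. The sketch below is a minimal Wright-Fisher-style model, a standard textbook idealisation rather than anything specific to this article: each generation, an allele at frequency p with an assumed relative fitness of 1 + s is resampled binomially in a haploid population of N individuals. When N is small, chance sampling (drift) dominates and the beneficial allele is often lost; when N is large, even a small advantage reliably carries it to fixation.

```python
# Minimal Wright-Fisher-style simulation of one allele under selection and drift.
# Haploid population of N individuals; allele A has relative fitness 1 + s.
# A textbook idealisation for illustration, not a model of any particular species.
import numpy as np

def simulate(N, s, p0, generations=5000, rng=None):
    rng = rng or np.random.default_rng()
    p = p0
    for _ in range(generations):
        if p == 0.0 or p == 1.0:                 # allele lost or fixed
            break
        # Selection: weight allele A by its relative fitness before sampling.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Drift: binomial resampling of the N individuals of the next generation.
        p = rng.binomial(N, p_sel) / N
    return p

# A mildly beneficial allele (s = 0.02) starting at 10% frequency:
rng = np.random.default_rng(42)
for N in (50, 500, 5000):
    runs = 200
    fixed = sum(simulate(N, s=0.02, p0=0.1, rng=rng) == 1.0 for _ in range(runs))
    print(f"N={N:>5}: allele fixed in {fixed}/{runs} runs")
```

Repeated runs from identical starting conditions give different outcomes, which is the sampling error referred to above; only in larger populations does the deterministic effect of selection dominate the randomness of drift.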
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. The number of individuals in a population is not critical, but instead a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. Mutation bias Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution. Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature. For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size. However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation. Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates. Several studies report that the mutations implicated in adaptation reflect common mutation biases though others dispute this interpretation. Genetic hitchhiking Recombination allows alleles on the same strand of DNA to become separated. 
However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size. Sexual selection A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits. Natural outcomes Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction. 
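The linkage disequilibrium mentioned under genetic hitchhiking can be quantified directly from haplotype counts. The short sketch below is a generic worked example with invented counts, using the standard measure D = p(AB) − p(A)p(B) and its normalised form D′; it does not reproduce data from any study cited in this article.

```python
# Worked example: linkage disequilibrium between two loci (alleles A/a and B/b).
# Haplotype counts are invented purely for illustration.
counts = {"AB": 450, "Ab": 50, "aB": 60, "ab": 440}
n = sum(counts.values())

p_AB = counts["AB"] / n                    # frequency of the A-B haplotype
p_A = (counts["AB"] + counts["Ab"]) / n    # frequency of allele A
p_B = (counts["AB"] + counts["aB"]) / n    # frequency of allele B

# D = 0 would mean the alleles are associated at random (linkage equilibrium).
D = p_AB - p_A * p_B

# Normalise D by its maximum possible value given the allele frequencies (D').
if D >= 0:
    D_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
else:
    D_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
D_prime = D / D_max if D_max > 0 else 0.0

print(f"p(A)={p_A:.2f}, p(B)={p_B:.2f}, p(AB)={p_AB:.2f}")
print(f"D={D:.3f}, D'={D_prime:.2f}")
```

With these made-up counts the A and B alleles co-occur far more often than expected by chance (D′ close to 0.8), the kind of association that a selective sweep on one of the two loci can produce.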
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time. Adaptation Adaptation is the process that makes organisms better suited to their habitat. Also, the term adaptation may refer to a trait that is important for an organism's survival. For example, the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky: Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats. Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats. An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing. Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability). Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. 
However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology. During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes. However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes. An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes. Coevolution Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake. 
Cooperation Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system. Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer. Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms. Speciation Speciation is the process where a species diverges into two or more descendant species. There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC like other species concepts is not without controversy, for example, because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species. Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. 
Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example. Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed. The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change. The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance. Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve. One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. 
However, it is more common in plants because plants often double their number of chromosomes to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms. Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils. Extinction Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently, with only one-thousandth of 1% described. The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction.
The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors. Applications Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution. Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation. Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics, and predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level. In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems. Evolutionary history of life Origin of life The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe."
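Returning to the evolutionary algorithms mentioned under Applications above, the sketch below is a deliberately minimal genetic algorithm with an invented toy objective (maximising the number of 1-bits in a string). It illustrates the general selection-mutation-recombination loop only; it is not a reconstruction of any particular system due to Rechenberg or Holland, and all parameters are arbitrary.

```python
# Minimal genetic algorithm: evolve bit strings towards all 1s ("one-max" toy problem).
# Objective and parameters chosen purely for illustration.
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 40, 60, 0.02
rng = random.Random(0)

def fitness(bits):                      # number of 1-bits in the string
    return sum(bits)

def mutate(bits):                       # flip each bit with small probability
    return [b ^ (rng.random() < MUTATION_RATE) for b in bits]

def crossover(a, b):                    # single-point recombination
    point = rng.randrange(1, LENGTH)
    return a[:point] + b[point:]

population = [[rng.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    # Reproduction with variation: crossover plus mutation.
    population = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{LENGTH}")
```

The loop mirrors the biological ingredients discussed in this article: heritable variation (mutation and recombination), differential reproduction (here, simple truncation selection), and iteration over generations.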
In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described. Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells. Common descent All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree. Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned. Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry. More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed. Evolution of life Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. 
The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants. The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells. Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis. About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes. History of evolutionary thought Classical antiquity The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura. Middle Ages In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be. A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous". 
Pre-Darwinian The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan. Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin. Darwinian revolution The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. 
Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe. Pangenesis and heredity The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled. The 'modern synthesis' In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in population. It explained patterns observed across species in populations, through fossil transitions in palaeontology. Further syntheses Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations. The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. 
In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet. One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability. Social and cultural responses In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists. While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists. The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China. See also Chronospecies References Bibliography The notebook is available from The Complete Work of Charles Darwin Online . Retrieved 2019-10-09. The book is available from The Complete Work of Charles Darwin Online . Retrieved 2014-11-21. "Proceedings of a symposium held at the American Museum of Natural History in New York, 2002." . Retrieved 2014-11-29. 
"Papers from the Symposium on the Limits of Reductionism in Biology, held at the Novartis Foundation, London, May 13–15, 1997." "Based on a conference held in Bellagio, Italy, June 25–30, 1989" Further reading Introductory reading American version. Advanced reading External links General information Adobe Flash required. "History of Evolution in the United States". Salon. Retrieved 2021-08-24. Experiments Online lectures Biology theories
Molecular mechanics
Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid, and the potential energy of the system is calculated as a function of the nuclear coordinates using force fields. Molecular mechanics can be used to study molecular systems ranging in size and complexity from small molecules to large biological systems or material assemblies with many thousands to millions of atoms. All-atomistic molecular mechanics methods have the following properties: each atom is simulated as one particle; each particle is assigned a radius (typically the van der Waals radius), a polarizability, and a constant net charge (generally derived from quantum calculations and/or experiment); and bonded interactions are treated as springs with an equilibrium distance equal to the experimental or calculated bond length. Variants on this theme are possible. For example, many simulations have historically used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid. Functional form The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy (E) in a given conformation as a sum of individual energy terms: E = E_covalent + E_noncovalent, where the components of the covalent and noncovalent contributions are given by the summations E_covalent = E_bond + E_angle + E_dihedral and E_noncovalent = E_electrostatic + E_vdW. The exact functional form of the potential function, or force field, depends on the particular simulation program being used. Generally the bond and angle terms are modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or from theoretical calculations of electronic structure performed with ab initio software such as Gaussian. For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at greater computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation. This class of terms may include improper dihedral terms, which function as correction factors for out-of-plane deviations (for example, they can be used to keep benzene rings planar, or to correct the geometry and chirality of tetrahedral atoms in a united-atom representation). The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors but interacts with every other atom in the molecule. Fortunately the van der Waals term falls off rapidly with distance. It is typically modeled using a 6–12 Lennard-Jones potential, which means that attractive forces fall off with distance as r−6 and repulsive forces as r−12, where r represents the distance between two atoms. The repulsive r−12 part is, however, unphysical, because real repulsion increases roughly exponentially with decreasing distance rather than as a power law. Description of van der Waals forces by the Lennard-Jones 6–12 potential therefore introduces inaccuracies, which become significant at short distances. Generally a cutoff radius is used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero.
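As a rough illustration of the 6–12 form and the cutoff scheme just described, the following Python sketch evaluates the Lennard-Jones pair energy and zeroes it beyond a cutoff radius. The parameter values are only illustrative (an argon-like well depth and size), not taken from any particular force field.

import numpy as np

def lennard_jones(r, epsilon=0.238, sigma=3.4, cutoff=10.0):
    # 6-12 Lennard-Jones pair energy.
    # r: interatomic distance(s) in angstroms; epsilon: well depth (kcal/mol);
    # sigma: distance at which the pair energy crosses zero (angstroms);
    # cutoff: beyond this distance the interaction is set to zero.
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r) ** 6                       # attractive r^-6 dependence
    energy = 4.0 * epsilon * (sr6 ** 2 - sr6)    # repulsive r^-12 minus attractive r^-6
    return np.where(r > cutoff, 0.0, energy)

# The potential minimum sits at r = 2^(1/6) * sigma, where the energy equals -epsilon.
r_min = 2 ** (1 / 6) * 3.4
print(lennard_jones([3.0, r_min, 12.0]))         # repulsive, at the minimum, beyond the cutoff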
The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study (especially for proteins). The basic functional form is the Coulomb potential, which falls off only as r−1. A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions are somewhat more accurate: they multiply the calculated electrostatic energy by a smoothly varying factor that goes from 1 at an inner cutoff radius to 0 at the outer cutoff radius. Other more sophisticated but computationally intensive methods are particle mesh Ewald (PME) and the multipole algorithm. In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms. These terms, together with the equilibrium bond, angle, and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field. Parameterization is typically done by fitting to experimental values and the results of theoretical calculations. Norman L. Allinger's force field, in its latest MM4 version, calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol, vibrational spectra with an RMS error of 24 cm−1, rotational barriers with an RMS error of 2.2°, bond lengths within 0.004 Å and angles within 1°. Later MM4 versions also cover compounds with heteroatoms, such as aliphatic amines. Each force field is parameterized to be internally consistent, but the parameters are generally not transferable from one force field to another. Areas of application The main use of molecular mechanics is in the field of molecular dynamics. This uses the force field to calculate the forces acting on each particle and a suitable integrator to model the dynamics of the particles and predict trajectories. Given enough sampling and subject to the ergodic hypothesis, molecular dynamics trajectories can be used to estimate thermodynamic parameters of a system or to probe kinetic properties, such as reaction rates and mechanisms. Molecular mechanics is also used within QM/MM, which allows the study of proteins and enzyme kinetics. The system is divided into two regions: one is treated with quantum mechanics (QM), allowing bonds to break and form, while the rest of the protein is modeled using molecular mechanics (MM). MM alone does not allow the study of enzyme mechanisms, which QM does, and QM also produces a more exact energy calculation of the system, although it is much more computationally expensive. Another application of molecular mechanics is energy minimization, whereby the force field is used as an optimization criterion. This method uses an appropriate algorithm (e.g. steepest descent) to find the molecular structure of a local energy minimum. These minima correspond to stable conformers of the molecule (in the chosen force field), and molecular motion can be modelled as vibrations around, and interconversions between, these stable conformers. It is thus common to find local energy minimization methods combined with global energy optimization, to find the global energy minimum (and other low energy states).
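A minimal sketch of the energy-minimization idea described above, using steepest descent on a stand-in potential; the toy harmonic "bond", the step size and the convergence threshold are placeholders rather than settings from any simulation package.

import numpy as np

def steepest_descent(energy, grad, x0, step=1e-3, tol=1e-6, max_iter=10000):
    # Walk downhill along the negative gradient until the forces are negligible.
    # energy: callable returning the potential energy at coordinates x
    # grad:   callable returning its gradient (the negative of the force)
    # x0:     starting coordinates (e.g. flattened atomic positions)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # forces ~ 0: a local minimum (a stable conformer)
            break
        x = x - step * g              # move against the gradient
    return x, energy(x)

# Toy "force field": a single harmonic bond with force constant 300 and equilibrium length 1.5.
bond = lambda x: 0.5 * 300.0 * (x[0] - 1.5) ** 2
bond_grad = lambda x: np.array([300.0 * (x[0] - 1.5)])
print(steepest_descent(bond, bond_grad, [2.0]))   # converges to x close to 1.5, energy near 0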
At finite temperature, the molecule spends most of its time in these low-lying states, which thus dominate the molecular properties. Global optimization can be accomplished using simulated annealing, the Metropolis algorithm and other Monte Carlo methods, or using different deterministic methods of discrete or continuous optimization. While the force field represents only the enthalpic component of free energy (and only this component is included during energy minimization), it is possible to include the entropic component through the use of additional methods, such as normal mode analysis. Molecular mechanics potential energy functions have been used to calculate binding constants, protein folding kinetics, protonation equilibria, active site coordinates, and to design binding sites. Environment and solvation In molecular mechanics, several ways exist to define the environment surrounding a molecule or molecules of interest. A system can be simulated in vacuum (termed a gas-phase simulation) with no surrounding environment, but this is usually undesirable because it introduces artifacts in the molecular geometry, especially in charged molecules. Surface charges that would ordinarily interact with solvent molecules instead interact with each other, producing molecular conformations that are unlikely to be present in any other environment. The most accurate way to solvate a system is to place explicit water molecules in the simulation box with the molecules of interest and treat the water molecules as interacting particles like those in the other molecule(s). A variety of water models exist with increasing levels of complexity, representing water as a simple hard sphere (a united-atom model), as three separate particles with fixed bond angle, or even as four or five separate interaction centers to account for unpaired electrons on the oxygen atom. As water models grow more complex, related simulations grow more computationally intensive. A compromise method has been found in implicit solvation, which replaces the explicitly represented water molecules with a mathematical expression that reproduces the average behavior of water molecules (or other solvents such as lipids). This method is useful to prevent artifacts that arise from vacuum simulations and reproduces bulk solvent properties well, but cannot reproduce situations in which individual water molecules create specific interactions with a solute that are not well captured by the solvent model, such as water molecules that are part of the hydrogen bond network within a protein. Software packages This is a limited list; many more packages are available. See also References Literature External links Molecular dynamics simulation methods revised Molecular mechanics - it is simple Molecular physics Computational chemistry Intermolecular forces Molecular modelling
Functional programming
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program. In functional programming, functions are treated as first-class citizens, meaning that they can be bound to names (including local identifiers), passed as arguments, and returned from other functions, just as any other data type can. This allows programs to be written in a declarative and composable style, where small functions are combined in a modular manner. Functional programming is sometimes treated as synonymous with purely functional programming, a subset of functional programming which treats all functions as deterministic mathematical functions, or pure functions. When a pure function is called with some given arguments, it will always return the same result, and cannot be affected by any mutable state or other side effects. This is in contrast with impure procedures, common in imperative programming, which can have side effects (such as modifying the program's state or taking input from a user). Proponents of purely functional programming claim that by restricting side effects, programs can have fewer bugs, be easier to debug and test, and be more suited to formal verification. Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Lean is a functional programming language commonly used for verifying mathematical theorems. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8). History The lambda calculus, developed in the 1930s by Alonzo Church, is a formal system of computation built from function application. In 1937 Alan Turing proved that the lambda calculus and Turing machines are equivalent models of computation, showing that the lambda calculus is Turing complete. Lambda calculus forms the basis of all functional programming languages. An equivalent theoretical formulation, combinatory logic, was developed by Moses Schönfinkel and Haskell Curry in the 1920s and 1930s. Church later developed a weaker system, the simply-typed lambda calculus, which extended the lambda calculus by assigning a data type to all terms. This forms the basis for statically typed functional programming. The first high-level functional programming language, Lisp, was developed in the late 1950s for the IBM 700/7000 series of scientific computers by John McCarthy while at Massachusetts Institute of Technology (MIT). 
Lisp functions were defined using Church's lambda notation, extended with a label construct to allow recursive functions. Lisp first introduced many paradigmatic features of functional programming, though early Lisps were multi-paradigm languages, and incorporated support for numerous programming styles as new paradigms evolved. Later dialects, such as Scheme and Clojure, and offshoots such as Dylan and Julia, sought to simplify and rationalise Lisp around a cleanly functional core, while Common Lisp was designed to preserve and update the paradigmatic features of the numerous older dialects it replaced. Information Processing Language (IPL), 1956, is sometimes cited as the first computer-based functional programming language. It is an assembly-style language for manipulating lists of symbols. It does have a notion of generator, which amounts to a function that accepts a function as an argument, and, since it is an assembly-level language, code can be data, so IPL can be regarded as having higher-order functions. However, it relies heavily on the mutating list structure and similar imperative features. Kenneth E. Iverson developed APL in the early 1960s, described in his 1962 book A Programming Language. APL was the primary influence on John Backus's FP. In the early 1990s, Iverson and Roger Hui created J. In the mid-1990s, Arthur Whitney, who had previously worked with Iverson, created K, which is used commercially in financial industries along with its descendant Q. In the mid-1960s, Peter Landin invented SECD machine, the first abstract machine for a functional programming language, described a correspondence between ALGOL 60 and the lambda calculus, and proposed the ISWIM programming language. John Backus presented FP in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs". He defines functional programs as being built up in a hierarchical way by means of "combining forms" that allow an "algebra of programs"; in modern language, this means that functional programs follow the principle of compositionality. Backus's paper popularized research into functional programming, though it emphasized function-level programming rather than the lambda-calculus style now associated with functional programming. The 1973 language ML was created by Robin Milner at the University of Edinburgh, and David Turner developed the language SASL at the University of St Andrews. Also in Edinburgh in the 1970s, Burstall and Darlington developed the functional language NPL. NPL was based on Kleene Recursion Equations and was first introduced in their work on program transformation. Burstall, MacQueen and Sannella then incorporated the polymorphic type checking from ML to produce the language Hope. ML eventually developed into several dialects, the most common of which are now OCaml and Standard ML. In the 1970s, Guy L. Steele and Gerald Jay Sussman developed Scheme, as described in the Lambda Papers and the 1985 textbook Structure and Interpretation of Computer Programs. Scheme was the first dialect of lisp to use lexical scoping and to require tail-call optimization, features that encourage functional programming. In the 1980s, Per Martin-Löf developed intuitionistic type theory (also called constructive type theory), which associated functional programs with constructive proofs expressed as dependent types. 
This led to new approaches to interactive theorem proving and has influenced the development of subsequent functional programming languages. The lazy functional language, Miranda, developed by David Turner, initially appeared in 1985 and had a strong influence on Haskell. With Miranda being proprietary, Haskell began with a consensus in 1987 to form an open standard for functional programming research; implementation releases have been ongoing as of 1990. More recently it has found use in niches such as parametric CAD in the OpenSCAD language built on the CGAL framework, although its restriction on reassigning values (all values are treated as constants) has led to confusion among users who are unfamiliar with functional programming as a concept. Functional programming continues to be used in commercial settings. Concepts A number of concepts and paradigms are specific to functional programming, and generally foreign to imperative programming (including object-oriented programming). However, programming languages often cater to several programming paradigms, so programmers using "mostly imperative" languages may have utilized some of these concepts. First-class and higher-order functions Higher-order functions are functions that can either take other functions as arguments or return them as results. In calculus, an example of a higher-order function is the differential operator , which returns the derivative of a function . Higher-order functions are closely related to first-class functions in that higher-order functions and first-class functions both allow functions as arguments and results of other functions. The distinction between the two is subtle: "higher-order" describes a mathematical concept of functions that operate on other functions, while "first-class" is a computer science term for programming language entities that have no restriction on their use (thus first-class functions can appear anywhere in the program that other first-class entities like numbers can, including as arguments to other functions and as their return values). Higher-order functions enable partial application or currying, a technique that applies a function to its arguments one at a time, with each application returning a new function that accepts the next argument. This lets a programmer succinctly express, for example, the successor function as the addition operator partially applied to the natural number one. Pure functions Pure functions (or expressions) have no side effects (memory or I/O). This means that pure functions have several useful properties, many of which can be used to optimize the code: If the result of a pure expression is not used, it can be removed without affecting other expressions. If a pure function is called with arguments that cause no side-effects, the result is constant with respect to that argument list (sometimes called referential transparency or idempotence), i.e., calling the pure function again with the same arguments returns the same result. (This can enable caching optimizations such as memoization.) If there is no data dependency between two pure expressions, their order can be reversed, or they can be performed in parallel and they cannot interfere with one another (in other terms, the evaluation of any pure expression is thread-safe). If the entire language does not allow side-effects, then any evaluation strategy can be used; this gives the compiler freedom to reorder or combine the evaluation of expressions in a program (for example, using deforestation). 
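As an informal Python illustration of two points made above, partial application (the successor function as addition partially applied to one) and the cacheability of pure functions, here is a short sketch; the function names are hypothetical and used only for demonstration.

from functools import lru_cache, partial

def add(a, b):
    return a + b

# Partial application: fix the first argument of add to obtain the successor function.
successor = partial(add, 1)
print(successor(41))                      # 42

# Memoization: a pure function always returns the same result for the same
# arguments, so its results can safely be cached.
@lru_cache(maxsize=None)
def square(n):
    return n * n                          # stands in for an expensive, side-effect-free computation

print(square(12), square(12))             # the second call is served from the cache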
While most compilers for imperative programming languages detect pure functions and perform common-subexpression elimination for pure function calls, they cannot always do this for pre-compiled libraries, which generally do not expose this information, thus preventing optimizations that involve those external functions. Some compilers, such as gcc, add extra keywords for a programmer to explicitly mark external functions as pure, to enable such optimizations. Fortran 95 also lets functions be designated pure. C++11 added constexpr keyword with similar semantics. Recursion Iteration (looping) in functional languages is usually accomplished via recursion. Recursive functions invoke themselves, letting an operation be repeated until it reaches the base case. In general, recursion requires maintaining a stack, which consumes space in a linear amount to the depth of recursion. This could make recursion prohibitively expensive to use instead of imperative loops. However, a special form of recursion known as tail recursion can be recognized and optimized by a compiler into the same code used to implement iteration in imperative languages. Tail recursion optimization can be implemented by transforming the program into continuation passing style during compiling, among other approaches. The Scheme language standard requires implementations to support proper tail recursion, meaning they must allow an unbounded number of active tail calls. Proper tail recursion is not simply an optimization; it is a language feature that assures users that they can use recursion to express a loop and doing so would be safe-for-space. Moreover, contrary to its name, it accounts for all tail calls, not just tail recursion. While proper tail recursion is usually implemented by turning code into imperative loops, implementations might implement it in other ways. For example, Chicken intentionally maintains a stack and lets the stack overflow. However, when this happens, its garbage collector will claim space back, allowing an unbounded number of active tail calls even though it does not turn tail recursion into a loop. Common patterns of recursion can be abstracted away using higher-order functions, with catamorphisms and anamorphisms (or "folds" and "unfolds") being the most obvious examples. Such recursion schemes play a role analogous to built-in control structures such as loops in imperative languages. Most general purpose functional programming languages allow unrestricted recursion and are Turing complete, which makes the halting problem undecidable, can cause unsoundness of equational reasoning, and generally requires the introduction of inconsistency into the logic expressed by the language's type system. Some special purpose languages such as Coq allow only well-founded recursion and are strongly normalizing (nonterminating computations can be expressed only with infinite streams of values called codata). As a consequence, these languages fail to be Turing complete and expressing certain functions in them is impossible, but they can still express a wide class of interesting computations while avoiding the problems introduced by unrestricted recursion. Functional programming limited to well-founded recursion with a few other constraints is called total functional programming. 
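The remark above that common recursion patterns can be abstracted by folds can be sketched in Python with functools.reduce; the example is illustrative only. Note that CPython does not perform the tail-call optimization discussed above, so deep explicit recursion can still overflow the stack, whereas the fold iterates internally.

from functools import reduce

# A sum written with explicit recursion ...
def sum_rec(xs):
    if not xs:                            # base case ends the recursion
        return 0
    return xs[0] + sum_rec(xs[1:])

# ... and the same computation expressed as a fold (a catamorphism), letting a
# higher-order function capture the recursion pattern.
def sum_fold(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

print(sum_rec([1, 2, 3, 4]), sum_fold([1, 2, 3, 4]))   # 10 10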
Strict versus non-strict evaluation Functional languages can be categorized by whether they use strict (eager) or non-strict (lazy) evaluation, concepts that refer to how function arguments are processed when an expression is being evaluated. The technical difference is in the denotational semantics of expressions containing failing or divergent computations. Under strict evaluation, the evaluation of any term containing a failing subterm fails. For example, the expression: print length([2+1, 3*2, 1/0, 5-4]) fails under strict evaluation because of the division by zero in the third element of the list. Under lazy evaluation, the length function returns the value 4 (i.e., the number of items in the list), since evaluating it does not attempt to evaluate the terms making up the list. In brief, strict evaluation always fully evaluates function arguments before invoking the function. Lazy evaluation does not evaluate function arguments unless their values are required to evaluate the function call itself. The usual implementation strategy for lazy evaluation in functional languages is graph reduction. Lazy evaluation is used by default in several pure functional languages, including Miranda, Clean, and Haskell. argues for lazy evaluation as a mechanism for improving program modularity through separation of concerns, by easing independent implementation of producers and consumers of data streams. Launchbury 1993 describes some difficulties that lazy evaluation introduces, particularly in analyzing a program's storage requirements, and proposes an operational semantics to aid in such analysis. Harper 2009 proposes including both strict and lazy evaluation in the same language, using the language's type system to distinguish them. Type systems Especially since the development of Hindley–Milner type inference in the 1970s, functional programming languages have tended to use typed lambda calculus, rejecting all invalid programs at compilation time and risking false positive errors, as opposed to the untyped lambda calculus, that accepts all valid programs at compilation time and risks false negative errors, used in Lisp and its variants (such as Scheme), as they reject all invalid programs at runtime when the information is enough to not reject valid programs. The use of algebraic data types makes manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in absence of other reliability techniques like test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. Some research-oriented functional languages such as Coq, Agda, Cayenne, and Epigram are based on intuitionistic type theory, which lets types depend on terms. Such types are called dependent types. These type systems do not have decidable type inference and are difficult to understand and program with. But dependent types can express arbitrary propositions in higher-order logic. Through the Curry–Howard isomorphism, then, well-typed programs in these languages become a means of writing formal mathematical proofs from which a compiler can generate certified code. While these languages are mainly of interest in academic research (including in formalized mathematics), they have begun to be used in engineering as well. Compcert is a compiler for a subset of the language C that is written in Coq and formally verified. 
A limited form of dependent types called generalized algebraic data types (GADTs) can be implemented in a way that provides some of the benefits of dependently typed programming while avoiding most of its inconvenience. GADTs are available in the Glasgow Haskell Compiler, in OCaml and in Scala, and have been proposed as additions to other languages including Java and C#. Referential transparency Functional programs do not have assignment statements; that is, the value of a variable in a functional program never changes once defined. This eliminates any chance of side effects, because any variable can be replaced with its actual value at any point of execution. Functional programs are therefore referentially transparent. Consider the C assignment statement x = x * 10, which changes the value assigned to the variable x. If the initial value of x is 1, then two consecutive evaluations of the statement yield 10 and 100, respectively. Replacing x = x * 10 with either 10 or 100 thus gives the program a different meaning, so the expression is not referentially transparent; in fact, assignment statements are never referentially transparent. By contrast, a function such as int plusone(int x) {return x+1;} is referentially transparent, as it does not implicitly change the input x and thus has no such side effects. Functional programs exclusively use this type of function and are therefore referentially transparent. Data structures Purely functional data structures are often represented differently from their imperative counterparts. For example, the array with constant access and update times is a basic component of most imperative languages, and many imperative data structures, such as the hash table and binary heap, are based on arrays. Arrays can be replaced by maps or random-access lists, which admit a purely functional implementation but have logarithmic access and update times. Purely functional data structures have persistence, a property of keeping previous versions of the data structure unmodified. In Clojure, persistent data structures are used as functional alternatives to their imperative counterparts. Persistent vectors, for example, use trees for partial updating: calling the insert method creates some but not all nodes anew, with the remainder shared with the previous version. Comparison to imperative programming Functional programming is very different from imperative programming. The most significant differences stem from the fact that functional programming avoids side effects, which are used in imperative programming to implement state and I/O. Pure functional programming completely prevents side effects and provides referential transparency. Higher-order functions are rarely used in older imperative programming. A traditional imperative program might use a loop to traverse and modify a list. A functional program, on the other hand, would probably use a higher-order "map" function that takes a function and a list, generating and returning a new list by applying the function to each list item. Imperative vs. functional programming The following two examples (written in JavaScript) achieve the same effect: they multiply all even numbers in an array by 10 and sum them, storing the final sum in the variable "result".
Traditional imperative loop: const numList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; let result = 0; for (let i = 0; i < numList.length; i++) { if (numList[i] % 2 === 0) { result += numList[i] * 10; } } Functional programming with higher-order functions: const result = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] .filter(n => n % 2 === 0) .map(a => a * 10) .reduce((a, b) => a + b, 0);Sometimes the abstractions offered by functional programming might lead to development of more robust code that avoids certain issues that might arise when building upon large amount of complex, imperative code, such as off-by-one errors (see Greenspun's tenth rule). Simulating state There are tasks (for example, maintaining a bank account balance) that often seem most naturally implemented with state. Pure functional programming performs these tasks, and I/O tasks such as accepting user input and printing to the screen, in a different way. The pure functional programming language Haskell implements them using monads, derived from category theory. Monads offer a way to abstract certain types of computational patterns, including (but not limited to) modeling of computations with mutable state (and other side effects such as I/O) in an imperative manner without losing purity. While existing monads may be easy to apply in a program, given appropriate templates and examples, many students find them difficult to understand conceptually, e.g., when asked to define new monads (which is sometimes needed for certain types of libraries). Functional languages also simulate states by passing around immutable states. This can be done by making a function accept the state as one of its parameters, and return a new state together with the result, leaving the old state unchanged. Impure functional languages usually include a more direct method of managing mutable state. Clojure, for example, uses managed references that can be updated by applying pure functions to the current state. This kind of approach enables mutability while still promoting the use of pure functions as the preferred way to express computations. Alternative methods such as Hoare logic and uniqueness have been developed to track side effects in programs. Some modern research languages use effect systems to make the presence of side effects explicit. Efficiency issues Functional programming languages are typically less efficient in their use of CPU and memory than imperative languages such as C and Pascal. This is related to the fact that some mutable data structures like arrays have a very straightforward implementation using present hardware. Flat arrays may be accessed very efficiently with deeply pipelined CPUs, prefetched efficiently through caches (with no complex pointer chasing), or handled with SIMD instructions. It is also not easy to create their equally efficient general-purpose immutable counterparts. For purely functional languages, the worst-case slowdown is logarithmic in the number of memory cells used, because mutable memory can be represented by a purely functional data structure with logarithmic access time (such as a balanced tree). However, such slowdowns are not universal. For programs that perform intensive numerical computations, functional languages such as OCaml and Clean are only slightly slower than C according to The Computer Language Benchmarks Game. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimizations. 
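Returning briefly to the "Simulating state" discussion above, the state-passing approach (accepting a state as an argument and returning a new state together with the result) can be sketched in Python as follows; the bank-account type and function are hypothetical examples.

from typing import NamedTuple

class Account(NamedTuple):                # NamedTuple instances are immutable
    balance: float

def deposit(account, amount):
    # Return a new account state together with the resulting balance;
    # the old state is left unchanged.
    new_state = Account(balance=account.balance + amount)
    return new_state, new_state.balance

before = Account(balance=100.0)
after, shown = deposit(before, 25.0)
print(before.balance, after.balance)      # 100.0 125.0 -- the original state is untouched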
Immutability of data can in many cases lead to execution efficiency by allowing the compiler to make assumptions that are unsafe in an imperative language, thus increasing opportunities for inline expansion. Even if the involved copying that may seem implicit when dealing with persistent immutable data structures might seem computationally costly, some functional programming languages, like Clojure solve this issue by implementing mechanisms for safe memory sharing between formally immutable data. Rust distinguishes itself by its approach to data immutability which involves immutable references and a concept called lifetimes. Immutable data with separation of identity and state and shared-nothing schemes can also potentially be more well-suited for concurrent and parallel programming by the virtue of reducing or eliminating the risk of certain concurrency hazards, since concurrent operations are usually atomic and this allows eliminating the need for locks. This is how for example java.util.concurrent classes are implemented, where some of them are immutable variants of the corresponding classes that are not suitable for concurrent use. Functional programming languages often have a concurrency model that instead of shared state and synchronization, leverages message passing mechanisms (such as the actor model, where each actor is a container for state, behavior, child actors and a message queue). This approach is common in Erlang/Elixir or Akka. Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks if used improperly). Launchbury 1993 discusses theoretical issues related to memory leaks from lazy evaluation, and O'Sullivan et al. 2008 give some practical advice for analyzing and fixing them. However, the most general implementations of lazy evaluation making extensive use of dereferenced code and data perform poorly on modern processors with deep pipelines and multi-level caches (where a cache miss may cost hundreds of cycles) . Abstraction cost Some functional programming languages might not optimize abstractions such as higher order functions like "map" or "filter" as efficiently as the underlying imperative operations. Consider, as an example, the following two ways to check if 5 is an even number in Clojure: (even? 5) (.equals (mod 5 2) 0) When benchmarked using the Criterium tool on a Ryzen 7900X GNU/Linux PC in a Leiningen REPL 2.11.2, running on Java VM version 22 and Clojure version 1.11.1, the first implementation, which is implemented as: (defn even? "Returns true if n is even, throws an exception if n is not an integer" {:added "1.0" :static true} [n] (if (integer? n) (zero? (bit-and (clojure.lang.RT/uncheckedLongCast n) 1)) (throw (IllegalArgumentException. (str "Argument must be an integer: " n))))) has the mean execution time of 4.76 ms, while the second one, in which .equals is a direct invocation of the underlying Java method, has a mean execution time of 2.8 μs – roughly 1700 times faster. Part of that can be attributed to the type checking and exception handling involved in the implementation of even?, so let's take for instance the lo library for Go, which implements various higher-order functions common in functional programming languages using generics. 
In a benchmark provided by the library's author, calling map is 4% slower than an equivalent for loop and has the same allocation profile, which can be attributed to various compiler optimizations, such as inlining. One distinguishing feature of Rust are zero-cost abstractions. This means that using them imposes no additional runtime overhead. This is achieved thanks to the compiler using loop unrolling, where each iteration of a loop, be it imperative or using iterators, is converted into a standalone Assembly instruction, without the overhead of the loop controlling code. If an iterative operation writes to an array, the resulting array's elements will be stored in specific CPU registers, allowing for constant-time access at runtime. Functional programming in non-functional languages It is possible to use a functional style of programming in languages that are not traditionally considered functional languages. For example, both D and Fortran 95 explicitly support pure functions. JavaScript, Lua, Python and Go had first class functions from their inception. Python had support for "lambda", "map", "reduce", and "filter" in 1994, as well as closures in Python 2.2, though Python 3 relegated "reduce" to the functools standard library module. First-class functions have been introduced into other mainstream languages such as PHP 5.3, Visual Basic 9, C# 3.0, C++11, and Kotlin. In PHP, anonymous classes, closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style. In Java, anonymous classes can sometimes be used to simulate closures; however, anonymous classes are not always proper replacements to closures because they have more limited capabilities. Java 8 supports lambda expressions as a replacement for some anonymous classes. In C#, anonymous classes are not necessary, because closures and lambdas are fully supported. Libraries and language extensions for immutable data structures are being developed to aid programming in the functional style in C#. Many object-oriented design patterns are expressible in functional programming terms: for example, the strategy pattern simply dictates use of a higher-order function, and the visitor pattern roughly corresponds to a catamorphism, or fold. Similarly, the idea of immutable data from functional programming is often included in imperative programming languages, for example the tuple in Python, which is an immutable array, and Object.freeze() in JavaScript. Comparison to logic programming Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations. For example, the function, mother(X) = Y, (every X has only one mother Y) can be represented by the relation mother(X, Y). Whereas functions have a strict input-output pattern of arguments, relations can be queried with any pattern of inputs and outputs. Consider the following logic program: mother(charles, elizabeth). mother(harry, diana). The program can be queried, like a functional program, to generate mothers from children: ?- mother(harry, X). X = diana. ?- mother(charles, X). X = elizabeth. But it can also be queried backwards, to generate children: ?- mother(X, elizabeth). X = charles. ?- mother(X, diana). X = harry. It can even be used to generate all instances of the mother relation: ?- mother(X, Y). X = charles, Y = elizabeth. X = harry, Y = diana. 
Compared with relational syntax, functional syntax is a more compact notation for nested functions. For example, the definition of maternal grandmother in functional syntax can be written in the nested form: maternal_grandmother(X) = mother(mother(X)). The same definition in relational notation needs to be written in the unnested form: maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y). Here :- means if and , means and. However, the difference between the two representations is simply syntactic. In Ciao Prolog, relations can be nested, like functions in functional programming: grandparent(X) := parent(parent(X)). parent(X) := mother(X). parent(X) := father(X). mother(charles) := elizabeth. father(charles) := phillip. mother(harry) := diana. father(harry) := charles. ?- grandparent(X,Y). X = harry, Y = elizabeth. X = harry, Y = phillip. Ciao transforms the function-like notation into relational form and executes the resulting logic program using the standard Prolog execution strategy. Applications Text editors Emacs, a highly extensible text editor family uses its own Lisp dialect for writing plugins. The original author of the most popular Emacs implementation, GNU Emacs and Emacs Lisp, Richard Stallman considers Lisp one of his favorite programming languages. Helix, since version 24.03 supports previewing AST as S-expressions, which are also the core feature of the Lisp programming language family. Spreadsheets Spreadsheets can be considered a form of pure, zeroth-order, strict-evaluation functional programming system. However, spreadsheets generally lack higher-order functions as well as code reuse, and in some implementations, also lack recursion. Several extensions have been developed for spreadsheet programs to enable higher-order and reusable functions, but so far remain primarily academic in nature. Academia Functional programming is an active area of research in the field of programming language theory. There are several peer-reviewed publication venues focusing on functional programming, including the International Conference on Functional Programming, the Journal of Functional Programming, and the Symposium on Trends in Functional Programming. Industry Functional programming has been employed in a wide range of industrial applications. For example, Erlang, which was developed by the Swedish company Ericsson in the late 1980s, was originally used to implement fault-tolerant telecommunications systems, but has since become popular for building a range of applications at companies such as Nortel, Facebook, Électricité de France and WhatsApp. Scheme, a dialect of Lisp, was used as the basis for several applications on early Apple Macintosh computers and has been applied to problems such as training-simulation software and telescope control. OCaml, which was introduced in the mid-1990s, has seen commercial use in areas such as financial analysis, driver verification, industrial robot programming and static analysis of embedded software. Haskell, though initially intended as a research language, has also been applied in areas such as aerospace systems, hardware design and web programming. Other functional programming languages that have seen use in industry include Scala, F#, Wolfram Language, Lisp, Standard ML and Clojure. Scala has been widely used in Data science, while ClojureScript, Elm or PureScript are some of the functional frontend programming languages used in production. 
Elixir's Phoenix framework is also used by some relatively popular commercial projects, such as Font Awesome or Allegro (one of the biggest e-commerce platforms in Poland)'s classified ads platform Allegro Lokalnie. Functional "platforms" have been popular in finance for risk analytics (particularly with large investment banks). Risk factors are coded as functions that form interdependent graphs (categories) to measure correlations in market shifts, similar in manner to Gröbner basis optimizations but also for regulatory frameworks such as Comprehensive Capital Analysis and Review. Given the use of OCaml and Caml variations in finance, these systems are sometimes considered related to a categorical abstract machine. Functional programming is heavily influenced by category theory. Education Many universities teach functional programming. Some treat it as an introductory programming concept while others first teach imperative programming methods. Outside of computer science, functional programming is used to teach problem-solving, algebraic and geometric concepts. It has also been used to teach classical mechanics, as in the book Structure and Interpretation of Classical Mechanics. In particular, Scheme has been a relatively popular choice for teaching programming for years. See also Eager evaluation Functional reactive programming Inductive functional programming List of functional programming languages List of functional programming topics Nested function Purely functional programming Notes and references Further reading Cousineau, Guy and Michel Mauny. The Functional Approach to Programming. Cambridge, UK: Cambridge University Press, 1998. Curry, Haskell Brooks and Feys, Robert and Craig, William. Combinatory Logic. Volume I. North-Holland Publishing Company, Amsterdam, 1958. Dominus, Mark Jason. Higher-Order Perl. Morgan Kaufmann. 2005. Graham, Paul. ANSI Common LISP. Englewood Cliffs, New Jersey: Prentice Hall, 1996. MacLennan, Bruce J. Functional Programming: Practice and Theory. Addison-Wesley, 1990. Pratt, Terrence W. and Marvin Victor Zelkowitz. Programming Languages: Design and Implementation. 3rd ed. Englewood Cliffs, New Jersey: Prentice Hall, 1996. Salus, Peter H. Functional and Logic Programming Languages. Vol. 4 of Handbook of Programming Languages. Indianapolis, Indiana: Macmillan Technical Publishing, 1998. Thompson, Simon. Haskell: The Craft of Functional Programming. Harlow, England: Addison-Wesley Longman Limited, 1996. External links An introduction Functional programming in Python (by David Mertz): part 1, part 2, part 3 Programming paradigms Articles with example C code
Amino acid replacement
Amino acid replacement is a change from one amino acid to a different amino acid in a protein due to a point mutation in the corresponding DNA sequence. It is caused by a nonsynonymous missense mutation, which changes the codon so that it codes for a different amino acid than the original.

Conservative and radical replacements

Not all amino acid replacements have the same effect on the function or structure of a protein. The magnitude of the effect may vary depending on how similar or dissimilar the replaced amino acids are, as well as on their position in the sequence or the structure. Similarity between amino acids can be calculated based on substitution matrices, physico-chemical distance, or simple properties such as amino acid size or charge (see also amino acid chemical properties). Amino acid replacements are thus usually classified into two types:

Conservative replacement - an amino acid is exchanged for another that has similar properties. This type of replacement is expected to rarely result in dysfunction of the corresponding protein.
Radical replacement - an amino acid is exchanged for another with different properties. This can lead to changes in protein structure or function, which can potentially lead to changes in phenotype, sometimes pathogenic. A well-known example in humans is sickle cell anemia, due to a mutation in beta globin where at position 6 glutamic acid (negatively charged) is exchanged with valine (not charged).

Physicochemical distances

Physicochemical distance is a measure that assesses the difference between replaced amino acids. The value of the distance is based on properties of the amino acids. There are 134 physicochemical properties that can be used to estimate similarity between amino acids. Each physicochemical distance is based on a different composition of properties.

Grantham's distance

Grantham's distance depends on three properties: composition, polarity and molecular volume. The distance D_ij for each pair of amino acids i and j is calculated as

D_ij = [ α(c_i − c_j)² + β(p_i − p_j)² + γ(v_i − v_j)² ]^(1/2)

where c = composition, p = polarity, and v = molecular volume; α, β and γ are constants equal to the squares of the inverses of the mean distances for each property, respectively 1.833, 0.1018 and 0.000399. According to Grantham's distance, the most similar amino acids are leucine and isoleucine and the most distant are cysteine and tryptophan.

Sneath's index

Sneath's index takes into account 134 categories of activity and structure. The dissimilarity index D is a percentage value of the sum of all properties not shared between two replaced amino acids; it is expressed as the percentage complement of the similarity S.

Epstein's coefficient of difference

Epstein's coefficient of difference is based on the differences in polarity and size between replaced pairs of amino acids. The index distinguishes the direction of exchange and is described by two equations: one for when a smaller hydrophobic residue is replaced by a larger hydrophobic or polar residue, and one for when a polar residue is exchanged or a larger residue is replaced by a smaller one.

Miyata's distance

Miyata's distance is based on two physicochemical properties: volume and polarity. The distance between amino acids a_i and a_j is calculated as

d_ij = sqrt( (Δp_ij / σ_p)² + (Δv_ij / σ_v)² )

where Δp_ij is the difference in polarity between the replaced amino acids, Δv_ij is the difference in volume, and σ_p and σ_v are the standard deviations of Δp and Δv.

Experimental Exchangeability

Experimental exchangeability was devised by Yampolsky and Stoltzfus. It is a measure of the mean effect of exchanging one amino acid for a different amino acid.
It is based on an analysis of experimental studies in which 9671 amino acid replacements from different proteins were compared for their effect on protein activity.

Typical and idiosyncratic amino acids

Amino acids can also be classified according to how many different amino acids they can be exchanged for through single nucleotide substitution.
Typical amino acids - there are several other amino acids into which they can change through single nucleotide substitution. Typical amino acids and their alternatives usually have similar physicochemical properties. Leucine is an example of a typical amino acid.
Idiosyncratic amino acids - there are few similar amino acids into which they can mutate through single nucleotide substitution. In this case most amino acid replacements will be disruptive for protein function. Tryptophan is an example of an idiosyncratic amino acid.

Tendency to undergo amino acid replacement

Some amino acids are more likely to be replaced than others. One of the factors that influences this tendency is physicochemical distance. An example of such a measure is Graur's stability index. The assumption behind this measure is that the rate of amino acid replacement, and hence protein evolution, depends on the amino acid composition of the protein. The stability index S of an amino acid is calculated from the physicochemical distances between that amino acid and the alternatives into which it can mutate through single nucleotide substitution, weighted by the probabilities of replacement by each of those amino acids. Based on Grantham's distance, the most immutable amino acid is cysteine, and the most prone to undergo exchange is methionine.

Patterns of amino acid replacement

Evolution of proteins is slower than that of DNA, since only nonsynonymous mutations in DNA can result in amino acid replacements. Most mutations are neutral, maintaining protein function and structure. Therefore, the more similar two amino acids are, the more probable it is that one will be replaced by the other. Conservative replacements are more common than radical replacements, since they tend to result in less significant phenotypic changes. On the other hand, beneficial mutations that enhance protein function are more likely to be radical replacements. Also, physicochemical distances, which are based on amino acid properties, are negatively correlated with the probability of amino acid substitution: the smaller the distance between two amino acids, the more likely they are to be replaced by one another.

References
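A minimal Python sketch of the Grantham calculation described above: the α, β, γ constants are the ones given in the text, but the per-amino-acid property triples (c, p, v) shown here are invented placeholders, not Grantham's published table, so the printed number is only illustrative.

# Grantham-style distance; the property values below are placeholders, not real data.
ALPHA, BETA, GAMMA = 1.833, 0.1018, 0.000399

PROPS = {
    # amino acid: (composition c, polarity p, volume v) -- illustrative values only
    "Glu": (0.6, 12.0, 83.0),
    "Val": (0.1, 6.0, 84.0),
}

def grantham_distance(aa1, aa2, props=PROPS):
    c1, p1, v1 = props[aa1]
    c2, p2, v2 = props[aa2]
    return (ALPHA * (c1 - c2) ** 2
            + BETA * (p1 - p2) ** 2
            + GAMMA * (v1 - v2) ** 2) ** 0.5

# The sickle-cell replacement mentioned above, Glu -> Val
print(round(grantham_distance("Glu", "Val"), 1))

With Grantham's actual property table this pair scores as a fairly distant (radical) exchange, consistent with its pathogenic effect in sickle cell anemia.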
Bioanalysis
Bioanalysis is a sub-discipline of analytical chemistry covering the quantitative measurement of xenobiotics (drugs and their metabolites, and biological molecules in unnatural locations or concentrations) and biotics (macromolecules, proteins, DNA, large molecule drugs, metabolites) in biological systems. Modern bioanalytical chemistry Many scientific endeavors are dependent upon accurate quantification of drugs and endogenous substances in biological samples; the focus of bioanalysis in the pharmaceutical industry is to provide a quantitative measure of the active drug and/or its metabolite(s) for the purpose of pharmacokinetics, toxicokinetics, bioequivalence and exposure–response (pharmacokinetics/pharmacodynamics studies). Bioanalysis also applies to drugs used for illicit purposes, forensic investigations, anti-doping testing in sports, and environmental concerns. Bioanalysis was traditionally thought of in terms of measuring small molecule drugs. However, the past twenty years has seen an increase in biopharmaceuticals (e.g. proteins and peptides), which have been developed to address many of the same diseases as small molecules. These larger biomolecules have presented their own unique challenges to quantification. History The first studies measuring drugs in biological fluids were carried out to determine possible overdosing as part of the new science of forensic medicine/toxicology. Initially, nonspecific assays were applied to measuring drugs in biological fluids. These were unable to discriminate between the drug and its metabolites; for example, aspirin and sulfonamides (developed in the 1930s) were quantified by the use of colorimetric assays. Antibiotics were quantified by their ability to inhibit bacterial growth. The 1930s also saw the rise of pharmacokinetics, and as such the desire for more specific assays. Modern drugs are more potent, which has required more sensitive bioanalytical assays to accurately and reliably determine these drugs at lower concentrations. This has driven improvements in technology and analytical methods. Bioanalytical techniques Some techniques commonly used in bioanalytical studies include: Hyphenated techniques LC–MS (liquid chromatography–mass spectrometry) GC–MS (gas chromatography–mass spectrometry) LC–DAD (liquid chromatography–diode array detection) CE–MS (capillary electrophoresis–mass spectrometry) Chromatographic methods HPLC (high performance liquid chromatography) GC (gas chromatography) UPLC (ultra performance liquid chromatography) Supercritical fluid chromatography Electrophoresis Ligand binding assays Dual polarisation interferometry ELISA (Enzyme-linked immunosorbent assay) MIA (magnetic immunoassay) RIA (radioimmunoassay) Mass spectrometry Nuclear magnetic resonance The most frequently used techniques are: liquid chromatography coupled with tandem mass spectrometry (LC–MS/MS) for 'small' molecules and enzyme-linked immunosorbent assay (ELISA) for macromolecules. Sample preparation and extraction The bioanalyst deals with complex biological samples containing the analyte alongside a diverse range of chemicals that can have an adverse impact on the accurate and precise quantification of the analyte. As such, a wide range of techniques are applied to extract the analyte from its matrix. These include: Protein precipitation Liquid–liquid extraction Solid phase extraction Bioanalytical laboratories often deal with large numbers of samples, for example resulting from clinical trials. 
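Each of those samples is typically quantified against a calibration curve prepared from matrix-matched standards. The short Python sketch below illustrates the general idea only, with made-up numbers and a simple unweighted linear fit; validated bioanalytical methods commonly use weighted regression and dedicated software.

# Illustrative calibration-curve quantification; all numbers are made up.
standards_conc = [1, 5, 10, 50, 100]            # ng/mL calibration standards
standards_resp = [0.9, 5.2, 10.1, 49.0, 101.5]  # e.g. analyte/IS peak-area ratios

def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx               # slope, intercept

slope, intercept = linear_fit(standards_conc, standards_resp)

def back_calculate(response):
    # read an unknown sample's concentration off the calibration line
    return (response - intercept) / slope

print(round(back_calculate(25.0), 2))           # concentration in ng/mL

In routine work a fit like this is re-established for every analytical batch, which is one reason sample throughput dominates laboratory workflows.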
As such, automated sample preparation methods and liquid-handling robots are commonly employed to increase efficiency and reduce costs. Bioanalytical organisations There are several national and international bioanalytical organisations active throughout the world. Often they are part of a bigger organisation, e.g. Bioanalytical Focus Group and Ligand Binding Assay Bioanalytical Focus Group, which are both within the American Association of Pharmaceutical Scientists (AAPS) and FABIAN, a working group of the Analytical Chemistry Section of the Royal Netherlands Chemical Society. The European Bioanalysis Forum (EBF), on the other hand, is independent of any larger society or association. References Analytical chemistry Pharmacokinetics Toxicology
Biometrics
Biometrics are body measurements and calculations related to human characteristics and features. Biometric authentication (or realistic authentication) is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological characteristics which are related to the shape of the body. Examples include, but are not limited to fingerprint, palm veins, face recognition, DNA, palm print, hand geometry, iris recognition, retina, odor/scent, voice, shape of ears and gait. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to mouse movement, typing rhythm, gait, signature, voice, and behavioral profiling. Some researchers have coined the term behaviometrics (behavioral biometrics) to describe the latter class of biometrics. More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns. Biometric functionality Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven such factors to be used when assessing the suitability of any trait for use in biometric authentication. Biometric authentication is based upon biometric recognition which is an advanced method of recognising biological and behavioural characteristics of an Individual. Universality means that every person using a system should possess the trait. Uniqueness means the trait should be sufficiently different for individuals in the relevant population such that they can be distinguished from one another. Permanence relates to the manner in which a trait varies over time. More specifically, a trait with good permanence will be reasonably invariant over time with respect to the specific matching algorithm. Measurability (collectability) relates to the ease of acquisition or measurement of the trait. In addition, acquired data should be in a form that permits subsequent processing and extraction of the relevant feature sets. Performance relates to the accuracy, speed, and robustness of technology used (see performance section for more details). Acceptability relates to how well individuals in the relevant population accept the technology such that they are willing to have their biometric trait captured and assessed. Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute. Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application. The block diagram illustrates the two basic modes of a biometric system. 
First, in verification (or authentication) mode the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be. Three steps are involved in the verification of a person. In the first step, reference models for all the users are generated and stored in the model database. In the second step, some samples are matched with reference models to generate the genuine and impostor scores and calculate the threshold. The third step is the testing step. This process may use a smart card, username, or ID number (e.g. PIN) to indicate which template should be used for comparison. Positive recognition is a common use of the verification mode, "where the aim is to prevent multiple people from using the same identity". Second, in identification mode the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for positive recognition (so that the user does not have to provide any information about the template to be used) or for negative recognition of the person "where the system establishes whether the person is who she (implicitly or explicitly) denies to be". The latter function can only be achieved through biometrics since other methods of personal recognition, such as passwords, PINs, or keys, are ineffective. The first time an individual uses a biometric system is called enrollment. During enrollment, biometric information from an individual is captured and stored. In subsequent uses, biometric information is detected and compared with the information stored at the time of enrollment. Note that it is crucial that storage and retrieval of such systems themselves be secure if the biometric system is to be robust. The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the times it is an image acquisition system, but it can change according to the characteristics desired. The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, to enhance the input (e.g. removing background noise), to use some kind of normalization, etc. In the third block, necessary features are extracted. This step is an important step as the correct features need to be extracted in an optimal way. A vector of numbers or an image with particular properties is used to create a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement that are not used in the comparison algorithm are discarded in the template to reduce the file size and to protect the identity of the enrollee. However, depending on the scope of the biometric system, original biometric image sources may be retained, such as the PIV-cards used in the Federal Information Processing Standard Personal Identity Verification (PIV) of Federal Employees and Contractors (FIPS 201). During the enrollment phase, the template is simply stored somewhere (on a card or within a database or both). During the matching phase, the obtained template is passed to a matcher that compares it with other existing templates, estimating the distance between them using any algorithm (e.g. 
Hamming distance). The matching program analyses the template against the input. This will then be output for a specified use or purpose (e.g. entrance to a restricted area), though there is a concern that the use of biometric data may face mission creep. Selection of a biometric for any practical application depends upon the characteristic measurements and user requirements. In selecting a particular biometric, factors to consider include performance, social acceptability, ease of circumvention and/or spoofing, robustness, population coverage, size of equipment needed and identity theft deterrence. The selection of a biometric is based on user requirements and considers sensor and device availability, computational time and reliability, cost, sensor size, and power consumption.

Multimodal biometric system

Multimodal biometric systems use multiple sensors or biometrics to overcome the limitations of unimodal biometric systems. For instance, iris recognition systems can be compromised by aging irises and electronic fingerprint recognition can be worsened by worn-out or cut fingerprints. While unimodal biometric systems are limited by the integrity of their identifier, it is unlikely that several unimodal systems will suffer from identical limitations. Multimodal biometric systems can obtain sets of information from the same marker (i.e., multiple images of an iris, or scans of the same finger) or information from different biometrics (requiring fingerprint scans and, using voice recognition, a spoken passcode). Multimodal biometric systems can fuse these unimodal systems sequentially, simultaneously, a combination thereof, or in series, which refer to sequential, parallel, hierarchical and serial integration modes, respectively. Fusion of the biometric information can occur at different stages of a recognition system. In the case of feature-level fusion, the data itself or the features extracted from multiple biometrics are fused. Matching-score level fusion consolidates the scores generated by multiple classifiers pertaining to different modalities. Finally, in the case of decision-level fusion the final results of multiple classifiers are combined via techniques such as majority voting. Feature-level fusion is believed to be more effective than the other levels of fusion because the feature set contains richer information about the input biometric data than the matching score or the output decision of a classifier; therefore, fusion at the feature level is expected to provide better recognition results. Furthermore, evolving biometric market trends underscore the importance of technological integration, showing a shift towards combining multiple biometric modalities for enhanced security and identity verification, in line with the advancement of multimodal biometric systems. Spoof attacks consist of submitting fake biometric traits to biometric systems, and are a major threat that can curtail their security. Multi-modal biometric systems are commonly believed to be intrinsically more robust to spoof attacks, but recent studies have shown that they can be evaded by spoofing even a single biometric trait. One such proposed system is a multimodal biometric cryptosystem involving the face, fingerprint and palm vein, proposed by Prasanalakshmi. It combines biometrics with cryptography, with the palm vein acting as a cryptographic key, offering a high level of security since palm veins are unique and difficult to forge.
The fingerprint component involves minutiae extraction (terminations and bifurcations) and matching; the processing steps include image enhancement, binarization, ROI extraction, and minutiae thinning. The face component uses class-based scatter matrices to calculate features for recognition, and the palm vein acts as an unbreakable cryptographic key, ensuring that only the correct user can access the system. The cancelable biometrics concept allows the biometric traits to be altered slightly to ensure privacy and avoid theft; if compromised, new variations of the biometric data can be issued. For encryption, the fingerprint template is encrypted using the palm vein key via XOR operations, and the encrypted fingerprint is then hidden within the face image using steganographic techniques. For enrollment and verification, the biometric data (fingerprint, palm vein, face) are captured, encrypted, and embedded into a face image; the system later extracts the biometric data and compares it with stored values for verification. The system was tested with fingerprint databases, achieving 75% verification accuracy at an equal error rate of 25%, with processing times of approximately 50 seconds for enrollment and 22 seconds for verification. Its claimed advantages are high security due to palm vein encryption, effectiveness against biometric spoofing, the reliability of the multimodal approach if one biometric fails, and the potential for integration with smart cards or on-card systems, enhancing security in personal identification systems.

Performance

The discriminating powers of all biometric technologies depend on the amount of entropy they are able to encode and use in matching. The following are used as performance metrics for biometric systems (a small numerical illustration of the threshold trade-off follows below):

False match rate (FMR, also called FAR = False Accept Rate): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted. On a similarity scale, if the person is in reality an impostor but the matching score is higher than the threshold, then they are treated as genuine. This increases the FMR, which thus also depends upon the threshold value.
False non-match rate (FNMR, also called FRR = False Reject Rate): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected.
Receiver operating characteristic or relative operating characteristic (ROC): the ROC plot is a visual characterization of the trade-off between the FMR and the FNMR. In general, the matching algorithm performs a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Conversely, a higher threshold will reduce the FMR but increase the FNMR. A common variation is the detection error trade-off (DET), which is obtained using normal deviation scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors).
Equal error rate or crossover error rate (EER or CER): the rate at which both acceptance and rejection errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate.
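As a rough illustration of how FMR, FNMR and the EER depend on the decision threshold, the Python sketch below sweeps a threshold over made-up genuine and impostor similarity scores; the score lists and threshold grid are invented for illustration, not taken from any real system.

# Illustrative only: made-up similarity scores for genuine and impostor attempts.
genuine_scores = [0.91, 0.84, 0.77, 0.88, 0.69, 0.95, 0.81]
impostor_scores = [0.42, 0.55, 0.61, 0.38, 0.73, 0.50, 0.46]

def fmr_fnmr(threshold):
    # FMR: fraction of impostor scores accepted; FNMR: fraction of genuine scores rejected.
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

# Approximate the EER as the point on a threshold grid where |FMR - FNMR| is smallest.
thresholds = [t / 100 for t in range(101)]
eer_threshold = min(thresholds, key=lambda t: abs(fmr_fnmr(t)[0] - fmr_fnmr(t)[1]))
print(eer_threshold, fmr_fnmr(eer_threshold))

Lowering the threshold trades false non-matches for false matches; the ROC and DET plots mentioned above visualize exactly this trade-off.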
Failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input is unsuccessful. This is most commonly caused by low-quality inputs. Failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly. Template capacity: the maximum number of sets of data that can be stored in the system. History An early cataloguing of fingerprints dates back to 1885 when Juan Vucetich started a collection of fingerprints of criminals in Argentina. Josh Ellenbogen and Nitzan Lebovic argued that Biometrics originated in the identification systems of criminal activity developed by Alphonse Bertillon (1853–1914) and by Francis Galton's theory of fingerprints and physiognomy. According to Lebovic, Galton's work "led to the application of mathematical models to fingerprints, phrenology, and facial characteristics", as part of "absolute identification" and "a key to both inclusion and exclusion" of populations. Accordingly, "the biometric system is the absolute political weapon of our era" and a form of "soft control". The theoretician David Lyon showed that during the past two decades biometric systems have penetrated the civilian market, and blurred the lines between governmental forms of control and private corporate control. Kelly A. Gates identified 9/11 as the turning point for the cultural language of our present: "in the language of cultural studies, the aftermath of 9/11 was a moment of articulation, where objects or events that have no necessary connection come together and a new discourse formation is established: automated facial recognition as a homeland security technology." Adaptive biometric systems Adaptive biometric systems aim to auto-update the templates or model to the intra-class variation of the operational data. The two-fold advantages of these systems are solving the problem of limited training data and tracking the temporal variations of the input data through adaptation. Recently, adaptive biometrics have received a significant attention from the research community. This research direction is expected to gain momentum because of their key promulgated advantages. First, with an adaptive biometric system, one no longer needs to collect a large number of biometric samples during the enrollment process. Second, it is no longer necessary to enroll again or retrain the system from scratch in order to cope with the changing environment. This convenience can significantly reduce the cost of maintaining a biometric system. Despite these advantages, there are several open issues involved with these systems. For mis-classification error (false acceptance) by the biometric system, cause adaptation using impostor sample. However, continuous research efforts are directed to resolve the open issues associated to the field of adaptive biometrics. More information about adaptive biometric systems can be found in the critical review by Rattani et al. Recent advances in emerging biometrics In recent times, biometrics based on brain (electroencephalogram) and heart (electrocardiogram) signals have emerged. An example is finger vein recognition, using pattern-recognition techniques, based on images of human vascular patterns. The advantage of this newer technology is that it is more fraud resistant compared to conventional biometrics like fingerprints. However, such technology is generally more cumbersome and still has issues such as lower accuracy and poor reproducibility over time. 
On the portability side of biometric products, more and more vendors are embracing significantly miniaturized biometric authentication systems (BAS) thereby driving elaborate cost savings, especially for large-scale deployments. Operator signatures An operator signature is a biometric mode where the manner in which a person using a device or complex system is recorded as a verification template. One potential use for this type of biometric signature is to distinguish among remote users of telerobotic surgery systems that utilize public networks for communication. Proposed requirement for certain public networks John Michael (Mike) McConnell, a former vice admiral in the United States Navy, a former director of U.S. National Intelligence, and senior vice president of Booz Allen Hamilton, promoted the development of a future capability to require biometric authentication to access certain public networks in his keynote speech at the 2009 Biometric Consortium Conference. A basic premise in the above proposal is that the person that has uniquely authenticated themselves using biometrics with the computer is in fact also the agent performing potentially malicious actions from that computer. However, if control of the computer has been subverted, for example in which the computer is part of a botnet controlled by a hacker, then knowledge of the identity of the user at the terminal does not materially improve network security or aid law enforcement activities. Animal biometrics Rather than tags or tattoos, biometric techniques may be used to identify individual animals: zebra stripes, blood vessel patterns in rodent ears, muzzle prints, bat wing patterns, primate facial recognition and koala spots have all been tried. Issues and concerns Human dignity Biometrics have been considered also instrumental to the development of state authority (to put it in Foucauldian terms, of discipline and biopower). By turning the human subject into a collection of biometric parameters, biometrics would dehumanize the person, infringe bodily integrity, and, ultimately, offend human dignity. In a well-known case, Italian philosopher Giorgio Agamben refused to enter the United States in protest at the United States Visitor and Immigrant Status Indicator (US-VISIT) program's requirement for visitors to be fingerprinted and photographed. Agamben argued that gathering of biometric data is a form of bio-political tattooing, akin to the tattooing of Jews during the Holocaust. According to Agamben, biometrics turn the human persona into a bare body. Agamben refers to the two words used by Ancient Greeks for indicating "life", zoe, which is the life common to animals and humans, just life; and bios, which is life in the human context, with meanings and purposes. Agamben envisages the reduction to bare bodies for the whole humanity. For him, a new bio-political relationship between citizens and the state is turning citizens into pure biological life (zoe) depriving them from their humanity (bios); and biometrics would herald this new world. In Dark Matters: On the Surveillance of Blackness, surveillance scholar Simone Browne formulates a similar critique as Agamben, citing a recent study relating to biometrics R&D that found that the gender classification system being researched "is inclined to classify Africans as males and Mongoloids as females." 
Consequently, Browne argues that the conception of an objective biometric technology is difficult if such systems are subjectively designed, and are vulnerable to cause errors as described in the study above. The stark expansion of biometric technologies in both the public and private sector magnifies this concern. The increasing commodification of biometrics by the private sector adds to this danger of loss of human value. Indeed, corporations value the biometric characteristics more than the individuals value them. Browne goes on to suggest that modern society should incorporate a "biometric consciousness" that "entails informed public debate around these technologies and their application, and accountability by the state and the private sector, where the ownership of and access to one's own body data and other intellectual property that is generated from one's body data must be understood as a right." Other scholars have emphasized, however, that the globalized world is confronted with a huge mass of people with weak or absent civil identities. Most developing countries have weak and unreliable documents and the poorer people in these countries do not have even those unreliable documents. Without certified personal identities, there is no certainty of right, no civil liberty. One can claim his rights, including the right to refuse to be identified, only if he is an identifiable subject, if he has a public identity. In such a sense, biometrics could play a pivotal role in supporting and promoting respect for human dignity and fundamental rights. Privacy and discrimination It is possible that data obtained during biometric enrollment may be used in ways for which the enrolled individual has not consented. For example, most biometric features could disclose physiological and/or pathological medical conditions (e.g., some fingerprint patterns are related to chromosomal diseases, iris patterns could reveal sex, hand vein patterns could reveal vascular diseases, most behavioral biometrics could reveal neurological diseases, etc.). Moreover, second generation biometrics, notably behavioral and electro-physiologic biometrics (e.g., based on electrocardiography, electroencephalography, electromyography), could be also used for emotion detection. There are three categories of privacy concerns: Unintended functional scope: The authentication goes further than authentication, such as finding a tumor. Unintended application scope: The authentication process correctly identifies the subject when the subject did not wish to be identified. Covert identification: The subject is identified without seeking identification or authentication, i.e. a subject's face is identified in a crowd. Danger to owners of secured items When thieves cannot get access to secure properties, there is a chance that the thieves will stalk and assault the property owner to gain access. If the item is secured with a biometric device, the damage to the owner could be irreversible, and potentially cost more than the secured property. For example, in 2005, Malaysian car thieves cut off a man's finger when attempting to steal his Mercedes-Benz S-Class. Attacks at presentation In the context of biometric systems, presentation attacks may also be called "spoofing attacks". As per the recent ISO/IEC 30107 standard, presentation attacks are defined as "presentation to the biometric capture subsystem with the goal of interfering with the operation of the biometric system". These attacks can be either impersonation or obfuscation attacks. 
Impersonation attacks try to gain access by pretending to be someone else. Obfuscation attacks may, for example, try to evade face detection and face recognition systems. Several methods have been proposed to counteract presentation attacks.

Surveillance humanitarianism in times of crisis

Biometrics are employed by many aid programs in times of crisis in order to prevent fraud and ensure that resources are properly available to those in need. Humanitarian efforts are motivated by promoting the welfare of individuals in need; however, the use of biometrics as a form of surveillance humanitarianism can create conflict due to the varying interests of the groups involved in the particular situation. Disputes over the use of biometrics between aid programs and party officials stall the distribution of resources to the people who need help the most. In July 2019, the United Nations World Food Program and the Houthi rebels were involved in a large dispute over the use of biometrics to ensure resources are provided to the hundreds of thousands of civilians in Yemen whose lives are threatened. The refusal to cooperate with the interests of the United Nations World Food Program resulted in the suspension of food aid to the Yemeni population. The use of biometrics may provide aid programs with valuable information; however, its potential solutions may not be best suited for chaotic times of crisis. In conflicts that are caused by deep-rooted political problems, the implementation of biometrics may not provide a long-term solution.

Cancelable biometrics

One advantage of passwords over biometrics is that they can be re-issued. If a token or a password is lost or stolen, it can be cancelled and replaced by a newer version. This is not naturally available in biometrics. If someone's face is compromised from a database, they cannot cancel or reissue it. If the electronic biometric identifier is stolen, it is nearly impossible to change a biometric feature. This renders the person's biometric feature questionable for future use in authentication, as in the case of the hacking of security-clearance-related background information from the Office of Personnel Management (OPM) in the United States. Cancelable biometrics is a way to incorporate protection and replacement features into biometrics to create a more secure system. It was first proposed by Ratha et al. "Cancelable biometrics refers to the intentional and systematically repeatable distortion of biometric features in order to protect sensitive user-specific data. If a cancelable feature is compromised, the distortion characteristics are changed, and the same biometrics is mapped to a new template, which is used subsequently. Cancelable biometrics is one of the major categories for biometric template protection purpose besides biometric cryptosystem." In a biometric cryptosystem, "the error-correcting coding techniques are employed to handle intraclass variations." This ensures a high level of security but has limitations, such as a specific input format that tolerates only small intraclass variations. Several methods for generating new exclusive biometrics have been proposed. The first fingerprint-based cancelable biometric system was designed and developed by Tulyakov et al. Essentially, cancelable biometrics perform a distortion of the biometric image or features before matching. The variability in the distortion parameters provides the cancelable nature of the scheme.
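To make the idea concrete, here is a toy Python sketch of a keyed, repeatable distortion, not any of the published schemes (such as Ratha's or Tulyakov's): a feature vector is transformed with a user-specific key before storage and matching, and revoking a compromised template simply means enrolling again with a new key.

import random

# Toy cancelable-template sketch; purely illustrative. Real schemes use carefully
# analysed non-invertible transforms, not a simple keyed shuffle-and-offset.
def distort(features, key):
    rng = random.Random(key)                 # the key makes the distortion repeatable
    perm = list(range(len(features)))
    rng.shuffle(perm)                        # key-dependent permutation
    offsets = [rng.uniform(-1, 1) for _ in features]
    return [features[p] + o for p, o in zip(perm, offsets)]

def match(stored_template, probe_features, key, tol=0.25):
    probe_template = distort(probe_features, key)
    return max(abs(a - b) for a, b in zip(stored_template, probe_template)) < tol

enrolled_features = [0.2, 0.8, 0.5, 0.9]       # made-up feature vector
user_key = 1234                                # user-specific key; issue a new one to revoke
stored = distort(enrolled_features, user_key)  # only the distorted template is stored

print(match(stored, [0.21, 0.79, 0.52, 0.88], user_key))  # same user: True
print(match(stored, [0.90, 0.10, 0.30, 0.40], user_key))  # different user: False

Changing the key maps the same underlying features to an unrelated template, which is what gives the scheme its cancelable character.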
Some of the proposed techniques operate using their own recognition engines, such as Teoh et al. and Savvides et al., whereas other methods, such as Dabbah et al., take the advantage of the advancement of the well-established biometric research for their recognition front-end to conduct recognition. Although this increases the restrictions on the protection system, it makes the cancellable templates more accessible for available biometric technologies Proposed soft biometrics Soft biometrics are understood as not strict biometrical recognition practices that are proposed in favour of identity cheaters and stealers. Traits are physical, behavioral or adhered human characteristics that have been derived from the way human beings normally distinguish their peers (e.g. height, gender, hair color). They are used to complement the identity information provided by the primary biometric identifiers. Although soft biometric characteristics lack the distinctiveness and permanence to recognize an individual uniquely and reliably, and can be easily faked, they provide some evidence about the users identity that could be beneficial. In other words, despite the fact they are unable to individualize a subject, they are effective in distinguishing between people. Combinations of personal attributes like gender, race, eye color, height and other visible identification marks can be used to improve the performance of traditional biometric systems. Most soft biometrics can be easily collected and are actually collected during enrollment. Two main ethical issues are raised by soft biometrics. First, some of soft biometric traits are strongly cultural based; e.g., skin colors for determining ethnicity risk to support racist approaches, biometric sex recognition at the best recognizes gender from tertiary sexual characters, being unable to determine genetic and chromosomal sexes; soft biometrics for aging recognition are often deeply influenced by ageist stereotypes, etc. Second, soft biometrics have strong potential for categorizing and profiling people, so risking of supporting processes of stigmatization and exclusion. Data protection of biometric data in international law Many countries, including the United States, are planning to share biometric data with other nations. In testimony before the US House Appropriations Committee, Subcommittee on Homeland Security on "biometric identification" in 2009, Kathleen Kraninger and Robert A Mocny commented on international cooperation and collaboration with respect to biometric data, as follows: According to an article written in 2009 by S. Magnuson in the National Defense Magazine entitled "Defense Department Under Pressure to Share Biometric Data" the United States has bilateral agreements with other nations aimed at sharing biometric data. To quote that article: Likelihood of full governmental disclosure Certain members of the civilian community are worried about how biometric data is used but full disclosure may not be forthcoming. In particular, the Unclassified Report of the United States' Defense Science Board Task Force on Defense Biometrics states that it is wise to protect, and sometimes even to disguise, the true and total extent of national capabilities in areas related directly to the conduct of security-related activities. This also potentially applies to Biometrics. It goes on to say that this is a classic feature of intelligence and military operations. In short, the goal is to preserve the security of 'sources and methods'. 
Countries applying biometrics Countries using biometrics include Australia, Brazil, Bulgaria, Canada, Cyprus, Greece, China, Gambia, Germany, India, Iraq, Ireland, Israel, Italy, Malaysia, Netherlands, New Zealand, Nigeria, Norway, Pakistan, Poland, South Africa, Saudi Arabia, Tanzania, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States and Venezuela. Among low to middle income countries, roughly 1.2 billion people have already received identification through a biometric identification program. There are also numerous countries applying biometrics for voter registration and similar electoral purposes. According to the International IDEA's ICTs in Elections Database, some of the countries using (2017) Biometric Voter Registration (BVR) are Armenia, Angola, Bangladesh, Bhutan, Bolivia, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Colombia, Comoros, Congo (Democratic Republic of), Costa Rica, Ivory Coast, Dominican Republic, Fiji, Gambia, Ghana, Guatemala, India, Iraq, Kenya, Lesotho, Liberia, Malawi, Mali, Mauritania, Mexico, Morocco, Mozambique, Namibia, Nepal, Nicaragua, Nigeria, Panama, Peru, Philippines, Senegal, Sierra Leone, Solomon Islands, Somaliland, Swaziland, Tanzania, Uganda, Uruguay, Venezuela, Yemen, Zambia, and Zimbabwe. India's national ID program India's national ID program called Aadhaar is the largest biometric database in the world. It is a biometrics-based digital identity assigned for a person's lifetime, verifiable online instantly in the public domain, at any time, from anywhere, in a paperless way. It is designed to enable government agencies to deliver a retail public service, securely based on biometric data (fingerprint, iris scan and face photo), along with demographic data (name, age, gender, address, parent/spouse name, mobile phone number) of a person. The data is transmitted in encrypted form over the internet for authentication, aiming to free it from the limitations of physical presence of a person at a given place. About 550 million residents have been enrolled and assigned 480 million Aadhaar national identification numbers as of 7 November 2013. It aims to cover the entire population of 1.2 billion in a few years. However, it is being challenged by critics over privacy concerns and possible transformation of the state into a surveillance state, or into a Banana republic.§ The project was also met with mistrust regarding the safety of the social protection infrastructures. To tackle the fear amongst the people, India's supreme court put a new ruling into action that stated that privacy from then on was seen as a fundamental right. On 24 August 2017 this new law was established. Malaysia's MyKad national ID program The current identity card, known as MyKad, was introduced by the National Registration Department of Malaysia on 5 September 2001 with Malaysia becoming the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on a built-in computer chip embedded in a piece of plastic. Besides the main purpose of the card as a validation tool and proof of citizenship other than the birth certificate, MyKad also serves as a valid driver's license, an ATM card, an electronic purse, and a public key, among other applications, as part of the Malaysian Government Multipurpose Card (GMPC) initiative, if the bearer chooses to activate the functions. 
See also Access control AFIS AssureSign BioAPI Biometrics in schools European Association for Biometrics Fingerprint recognition Fuzzy extractor Gait analysis Government database Handwritten biometric recognition Identity Cards Act 2006 International Identity Federation Keystroke dynamics Multiple Biometric Grand Challenge Private biometrics Retinal scan Signature recognition Smart city Speaker recognition Vein matching Voice analysis Notes References Further reading Biometrics Glossary – Glossary of Biometric Terms based on information derived from the National Science and Technology Council (NSTC) Subcommittee on Biometrics. Published by Fulcrum Biometrics, LLC, July 2013 Biometrics Institute - Explanatory Dictionary of Biometrics A glossary of biometrics terms, offering detailed definitions to supplement existing resources. Published May 2023. Delac, K., Grgic, M. (2004). A Survey of Biometric Recognition Methods. "Fingerprints Pay For School Lunch". (2001). Retrieved 2008-03-02. "Germany to phase-in biometric passports from November 2005". (2005). E-Government News. Retrieved 2006-06-11. Oezcan, V. (2003). "Germany Weighs Biometric Registration Options for Visa Applicants", Humboldt University Berlin. Retrieved 2006-06-11. Ulrich Hottelet: Hidden champion – Biometrics between boom and big brother, German Times, January 2007. Dunstone, T. and Yager, N., 2008. Biometric system and data analysis. 1st ed. New York: Springer. External links Surveillance Authentication methods Identification
Solid-state chemistry
Solid-state chemistry, also sometimes referred to as materials chemistry, is the study of the synthesis, structure, and properties of solid phase materials. It therefore has a strong overlap with solid-state physics, mineralogy, crystallography, ceramics, metallurgy, thermodynamics, materials science and electronics, with a focus on the synthesis of novel materials and their characterization. A diverse range of synthetic techniques, such as the ceramic method and chemical vapour deposition, are used to make solid-state materials. Solids can be classified as crystalline or amorphous on the basis of the nature of the order present in the arrangement of their constituent particles. Their elemental compositions, microstructures, and physical properties can be characterized through a variety of analytical methods.

History

Because of its direct relevance to products of commerce, solid state inorganic chemistry has been strongly driven by technology. Progress in the field has often been fueled by the demands of industry, sometimes in collaboration with academia. Applications discovered in the 20th century include zeolite and platinum-based catalysts for petroleum processing in the 1950s, high-purity silicon as a core component of microelectronic devices in the 1960s, and "high temperature" superconductivity in the 1980s. The invention of X-ray crystallography in the early 1900s by William Lawrence Bragg was an enabling innovation. Our understanding of how reactions proceed at the atomic level in the solid state was advanced considerably by Carl Wagner's work on oxidation rate theory, counter diffusion of ions, and defect chemistry. Because of his contributions, he has sometimes been referred to as the father of solid state chemistry.

Synthetic methods

Given the diversity of solid-state compounds, an equally diverse array of methods is used for their preparation. Synthesis can range from high-temperature methods, like the ceramic method, to gas-phase methods, like chemical vapour deposition. Often, the methods are chosen to prevent defect formation or to produce high-purity products.

High-temperature methods

Ceramic method
The ceramic method is one of the most common synthesis techniques, and the synthesis occurs entirely in the solid state. The reactants are ground together, formed into a pellet using a pellet press and a hydraulic press, and heated at high temperatures. When the temperature of the reactants is sufficient, the ions at the grain boundaries react to form the desired phases. Generally, ceramic methods give polycrystalline powders, but not single crystals. Using a mortar and pestle or a ball mill, the reactants are ground together, which decreases the particle size and increases the surface area of the reactants. If the mixing is not sufficient, techniques such as co-precipitation and sol-gel can be used. A chemist forms pellets from the ground reactants and places the pellets into containers for heating. The choice of container depends on the precursors, the reaction temperature and the expected product. For example, metal oxides are typically synthesized in silica or alumina containers. A tube furnace heats the pellet; tube furnaces are available up to maximum temperatures of 2800 °C.

Molten flux synthesis
Molten flux synthesis can be an efficient method for obtaining single crystals. In this method, the starting reagents are combined with a flux, an inert material with a melting point lower than that of the starting materials. The flux serves as a solvent.
After the reaction, the excess flux can be washed away using an appropriate solvent, or it can be heated again to remove the flux by sublimation if it is a volatile compound. Crucible materials play a great role in molten flux synthesis: the crucible should not react with the flux or the starting reagents. If any of the materials is volatile, it is recommended to conduct the reaction in a sealed ampule. If the target phase is sensitive to oxygen, a carbon-coated fused silica tube or a carbon crucible inside a fused silica tube is often used, which prevents direct contact between the tube wall and the reagents.

Chemical vapour transport
Chemical vapour transport results in very pure materials. The reaction typically occurs in a sealed ampoule. A transporting agent, added to the sealed ampoule, produces a volatile intermediate species from the solid reactant. For metal oxides, the transporting agent is usually Cl2 or HCl. The ampoule has a temperature gradient, and, as the gaseous reactant travels along the gradient, it eventually deposits as a crystal. An example of an industrially used chemical vapour transport reaction is the Mond process, in which impure nickel is heated in a stream of carbon monoxide to produce pure nickel.

Low-temperature methods

Intercalation method
Intercalation synthesis is the insertion of molecules or ions between the layers of a solid. The layered solid has weak intermolecular bonds holding its layers together. The process occurs via diffusion and is further driven by ion exchange, acid-base reactions or electrochemical reactions. The intercalation method was first used in China with the discovery of porcelain. Graphene is also produced by intercalation, and this method is the principle behind lithium-ion batteries.

Solution methods
It is possible to use solvents to prepare solids by precipitation or by evaporation. At times, the solvent is used under hydrothermal conditions, that is, under pressure at temperatures higher than its normal boiling point. A variation on this theme is the use of flux methods, in which a salt with a relatively low melting point serves as the solvent.

Gas methods
Many solids react vigorously with gas species such as chlorine, iodine, and oxygen. Other solids form adducts with gases such as CO or ethylene. Such reactions are conducted in open-ended tubes through which the gases are passed. They can also take place inside a measuring device such as a TGA; in that case, stoichiometric information can be obtained during the reaction, which helps identify the products.

Chemical vapour deposition
Chemical vapour deposition is a method widely used for the preparation of coatings and semiconductors from molecular precursors. A carrier gas transports the gaseous precursors to the material to be coated.

Characterization

This is the process in which a material's chemical composition, structure, and physical properties are determined using a variety of analytical techniques.

New phases
Synthetic methodology and characterization often go hand in hand in the sense that not one but a series of reaction mixtures are prepared and subjected to heat treatment. Stoichiometry, the numerical relationship between the quantities of reactant and product, is typically varied systematically. It is important to find which stoichiometries will lead to new solid compounds or to solid solutions between known ones.
A prime method to characterize the reaction products is powder diffraction, because many solid-state reactions will produce polycrystalline material or powders. Powder diffraction aids in the identification of known phases in the mixture. If a pattern is found that is not present in the diffraction data libraries, an attempt can be made to index the pattern. The characterization of a material's properties is typically easier for a product with a crystalline structure.

Compositions and structures
Once the unit cell of a new phase is known, the next step is to establish the stoichiometry of the phase. This can be done in several ways. Sometimes the composition of the original mixture will give a clue, under the circumstances that only a product with a single powder pattern is found, or a phase of a certain composition is made by analogy to a known material, but this is rare. Often, considerable effort in refining the synthetic procedure is required to obtain a pure sample of the new material. If it is possible to separate the product from the rest of the reaction mixture, elemental analysis methods such as scanning electron microscopy (SEM) and transmission electron microscopy (TEM) can be used. The detection of scattered and transmitted electrons from the surface of the sample provides information about the surface topography and composition of the material. Energy-dispersive X-ray spectroscopy (EDX) is a technique that uses electron-beam excitation. Exciting the inner shell of an atom with incident electrons emits characteristic X-rays with an energy specific to each element. The peak energies can identify the chemical composition of a sample, including the distribution and concentration. Similar to EDX, X-ray diffraction analysis (XRD) involves the generation of characteristic X-rays upon interaction with the sample. The intensity of the diffracted rays scattered at different angles is used to analyze the physical properties of a material, such as phase composition and crystallographic structure. These techniques can also be coupled to achieve a better effect. For example, SEM is a useful complement to EDX: owing to its focused electron beam, it produces a high-magnification image that provides information on the surface topography. Once the area of interest has been identified, EDX can be used to determine the elements present in that specific spot. Selected area electron diffraction can be coupled with TEM or SEM to investigate the level of crystallinity and the lattice parameters of a sample. X-ray diffraction is also widely used because of its imaging capabilities and speed of data generation. The latter often requires revisiting and refining the preparative procedures, which is linked to the question of which phases are stable at what composition and what stoichiometry; in other words, what the phase diagram looks like. Important tools in establishing this are thermal analysis techniques like DSC or DTA and, increasingly, due to the advent of synchrotrons, temperature-dependent powder diffraction. Increased knowledge of the phase relations often leads to further refinement of the synthetic procedures in an iterative way. New phases are thus characterized by their melting points and their stoichiometric domains. The latter is important for the many solids that are non-stoichiometric compounds. The cell parameters obtained from XRD are particularly helpful in characterizing the homogeneity ranges of such compounds.
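As a small illustration of what indexing involves, the Python sketch below uses Bragg's law (λ = 2d sin θ) to turn powder-pattern peak positions into d-spacings and compares them with a guessed cubic cell, where d = a / sqrt(h² + k² + l²). The wavelength is the common Cu K-alpha value, but the peak list, trial lattice constant and (hkl) assignments are invented example numbers.

import math

WAVELENGTH = 1.5406                          # Cu K-alpha, in angstroms
two_theta_peaks = [21.3, 30.2, 37.2, 43.2]   # made-up peak positions (degrees 2-theta)

def d_spacing(two_theta_deg, wavelength=WAVELENGTH):
    # Bragg's law: lambda = 2 d sin(theta)
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

def cubic_d(a, h, k, l):
    # d-spacing of the (h k l) planes of a cubic cell with lattice constant a
    return a / math.sqrt(h * h + k * k + l * l)

a_guess = 5.9                                # angstroms, trial lattice constant
hkl_trials = [(1, 1, 0), (2, 0, 0), (2, 1, 1), (2, 2, 0)]

for tt, hkl in zip(two_theta_peaks, hkl_trials):
    print(tt, round(d_spacing(tt), 3), hkl, round(cubic_d(a_guess, *hkl), 3))

If the observed and calculated spacings agree across the whole pattern, the trial cell is accepted and refined into the cell parameters mentioned above.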
Local structure In contrast to the large-scale structures of crystals, the local structure describes the interaction of the nearest neighbouring atoms. Methods of nuclear spectroscopy use specific nuclei to probe the electric and magnetic fields around the nucleus. For example, electric field gradients are very sensitive to small changes caused by lattice expansion or compression (thermal or pressure-induced), phase changes, or local defects. Common methods are Mössbauer spectroscopy and perturbed angular correlation. Optical properties The optical properties of metallic materials arise from the collective excitation of conduction electrons. The coherent oscillations of electrons under electromagnetic radiation, along with the associated oscillations of the electromagnetic field, are called surface plasmon resonances. The excitation wavelength and frequency of the plasmon resonances provide information on the particle's size, shape, composition, and local optical environment. Non-metallic materials and semiconductors can be characterized by their band structure, which contains a band gap representing the minimum energy difference between the top of the valence band and the bottom of the conduction band. The band gap can be determined using ultraviolet–visible spectroscopy to predict the photochemical properties of semiconductors. Further characterization In many cases, new solid compounds are further characterized by a variety of techniques that straddle the fine line that separates solid-state chemistry from solid-state physics. See Characterisation in material science for additional information. References External links Sadoway, Donald. 3.091SC Introduction to Solid State Chemistry, Fall 2010. (Massachusetts Institute of Technology: MIT OpenCourseWare) Materials science
Feasibility study
A feasibility study is an assessment of the practicality of a project or system. A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, opportunities and threats present in the natural environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are cost required and value to be attained. A well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements and tax obligations. Generally, feasibility studies precede technical development and project implementation. A feasibility study evaluates the project's potential for success; therefore, perceived objectivity is an important factor in the credibility of the study for potential investors and lending institutions. It must therefore be conducted with an objective, unbiased approach to provide information upon which decisions can be based. Formal definition A project feasibility study is a comprehensive report that examines in detail the five frames of analysis of a given project. It also takes into consideration its four Ps, its risks and POVs, and its constraints (calendar, costs, and norms of quality). The goal is to determine whether the project should go ahead, be redesigned, or else abandoned altogether. The five frames of analysis are: The frame of definition; the frame of contextual risks; the frame of potentiality; the parametric frame; the frame of dominant and contingency strategies. The four Ps are traditionally defined as Plan, Processes, People, and Power. The risks are considered to be external to the project (e.g., weather conditions) and are divided in eight categories: (Plan) financial and organizational (e.g., government structure for a private project); (Processes) environmental and technological; (People) marketing and sociocultural; and (Power) legal and political. POVs are Points of Vulnerability: they differ from risks in the sense that they are internal to the project and can be controlled or else eliminated. The constraints are the standard constraints of calendar, costs and norms of quality that can each be objectively determined and measured along the entire project lifecycle. Depending on projects, portions of the study may suffice to produce a feasibility study; smaller projects, for example, may not require an exhaustive environmental assessment. Common factors TELOS is an acronym in project management used to define five areas of feasibility that determine whether a project should run or not. T - Technical — Is the project technically possible? E - Economic — Can the project be afforded? Will it increase profit? L - Legal — Is the project legal? O - Operational — How will the current operations support the change? S - Scheduling — Can the project be done in time? Technical feasibility This assessment is based on an outline design of system requirements, to determine whether the company has the technical expertise to handle completion of the project. 
When writing a feasibility report, the following should be taken into consideration: A brief description of the business, to assess the possible factors that could affect the study The part of the business being examined The human and economic factors The possible solutions to the problem At this level, the concern is whether the proposal is both technically and legally feasible (assuming moderate cost). The technical feasibility assessment is focused on gaining an understanding of the present technical resources of the organization and their applicability to the expected needs of the proposed system. It is an evaluation of the hardware and software and how they meet the needs of the proposed system. Method of production The selection among a number of methods to produce the same commodity should be undertaken first. Factors that make one method preferred to another in agricultural projects are the following: Availability of inputs or raw materials, and their quality and prices. Availability of markets for the outputs of each method, and the expected prices for these outputs. Various efficiency factors, such as the expected increase in output from one additional unit of fertilizer, or the productivity of a specified crop per unit of input. Production technique Once the appropriate method of production of a commodity has been determined, it is necessary to look for the optimal technique to produce it. Project requirements Once the method of production and its technique are determined, technical staff have to determine the project's requirements during the investment and operating periods. These include: Determination of the tools and equipment needed for the project, such as drinkers and feeders, pumps, or pipes. Determination of the project's construction requirements, such as buildings, storage, and roads, in addition to the internal designs for these requirements. Determination of the project's requirements for skilled and unskilled labor, as well as managerial and financial staff. Determination of the construction period, covering the costs of designs and consultations as well as the costs of construction and other equipment. Determination of the minimum stock of inputs and the cash reserves needed to cope with operating and contingency costs. Project location The most important factors that determine the selection of the project location are the following: Availability of land (proper acreage and reasonable costs). The impact of the project on the environment and the approval of the concerned institutions for licensing. The costs of transporting inputs and outputs to the project's location (i.e., the distance from the markets). Availability of various services related to the project, such as extension services, veterinary services, water, electricity, and good roads. Legal feasibility It determines whether the proposed system conflicts with legal requirements; for example, a data processing system must comply with the local data protection regulations, and the proposed venture must be acceptable in accordance with the laws of the land. Operational feasibility study Operational feasibility is the measure of how well a proposed system solves problems and takes advantage of the opportunities identified during scope definition, and how it satisfies the requirements identified in the requirements analysis phase of system development.
The operational feasibility assessment focuses on the degree to which the proposed development project fits in with the existing business environment and objectives about the development schedule, delivery date, corporate culture and existing business processes. To ensure success, desired operational outcomes must be imparted during design and development. These include such design-dependent parameters as reliability, maintainability, supportability, usability, producibility, disposability, sustainability, affordability, etc. These parameters are required to be considered at the early stages of the design if desired operational behaviours are to be realised. A system design and development requires appropriate and timely application of engineering and management efforts to meet the previously mentioned parameters. A system may serve its intended purpose most effectively when its technical and operating characteristics are engineered into the design. Therefore, operational feasibility is a critical aspect of systems engineering that must be integral to the early design phases. Time feasibility A time feasibility study will take into account the period in which the project is going to take up to its completion. A project will fail if it takes too long to be completed before it is useful. Typically this means estimating how long the system will take to develop, and if it can be completed in a given time period using some methods like payback period. Time feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines. It is necessary to determine whether the deadlines are mandatory or desirable. Other feasibility factors Resource feasibility Describe how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, type and amount of resources required, dependencies, and developmental procedures with company revenue prospectus. Financial feasibility In case of a new project, financial viability can be judged on the following parameters: Total estimated cost of the project Financing of the project in terms of its capital structure, debt to equity ratio and promoter's share of total cost Existing investment by the promoter in any other business Projected cash flow and profitability The financial viability of a project should provide the following information: Full details of the assets to be financed and how liquid those assets are. Rate of conversion to cash-liquidity (i.e., how easily the various assets can be converted to cash). Project's funding potential and repayment terms. Sensitivity in the repayments capability to the following factors: Mild slowing of sales. Acute reduction/slowing of sales. Small increase in cost. Large increase in cost. Adverse economic conditions. In 1983 the first generation of the Computer Model for Feasibility Analysis and Reporting (COMFAR), a computation tool for financial analysis of investments, was released. Since then, this United Nations Industrial Development Organization (UNIDO) software has been developed to also support the economic appraisal of projects. The COMFAR III Expert is intended as an aid in the analysis of investment projects. The main module of the program accepts financial and economic data, produces financial and economic statements and graphical displays and calculates measures of performance. Supplementary modules assist in the analytical process. 
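By way of a purely illustrative sketch of the kind of calculation such tools automate, the snippet below computes a simple, undiscounted payback period from hypothetical projected cash flows; the function name, figures, and interpolation convention are assumptions for illustration, not part of COMFAR or any cited standard.

```python
# Illustrative sketch: simple (undiscounted) payback period from projected
# annual cash flows. The initial investment and cash-flow figures are hypothetical.

def payback_period(initial_investment, annual_cash_flows):
    """Return the payback period in years, interpolating within the final year,
    or None if the investment is never recovered."""
    cumulative = 0.0
    for year, cash_flow in enumerate(annual_cash_flows, start=1):
        previous = cumulative
        cumulative += cash_flow
        if cumulative >= initial_investment:
            shortfall = initial_investment - previous
            return (year - 1) + shortfall / cash_flow
    return None

# Hypothetical project: 100,000 invested up front, uneven returns thereafter.
print(payback_period(100_000, [30_000, 40_000, 35_000, 35_000]))  # about 2.86 years
```

A full financial feasibility analysis would of course also consider discounting, sensitivity to the sales and cost scenarios listed above, and the project's capital structure.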
Cost-benefit and value-added methods of economic analysis developed by UNIDO are included in the program, and the methods of major international development institutions are accommodated. The program is applicable to the analysis of investment in new projects and in the expansion or rehabilitation of existing enterprises, as, for example, in the case of reprivatisation projects. For joint ventures, the financial perspective of each partner or class of shareholder can be developed. Analysis can be performed under a variety of assumptions concerning inflation, currency revaluation and price escalations. Market research The market research study is one of the most important sections of the feasibility study, as it examines the marketability of the product or service and convinces readers that there is a potential market for the product or service. If a significant market for the product or service cannot be established, then there is no project. Typically, market studies will assess the potential sales of the product, absorption and market capture rates, and the project's timing. The output of the feasibility study is the feasibility study report, which details the evaluation criteria, the study findings, and the recommendations. See also Project appraisal Environmental impact Mining feasibility study Proof of concept SWOT analysis References Further reading Matson, James. "Cooperative Feasibility Study Guide", United States Department of Agriculture, Rural Business-Cooperative Service. October 2000. https://pilotandfeasibilitystudies.qmul.ac.uk/ External links Hoagland & Williamson 2000 United Nations Industrial Development Organization (UNIDO) Matson Allan Thompson 2003 Business process management Evaluation methods Project management
Anfinsen's dogma
Anfinsen's dogma, also known as the thermodynamic hypothesis, is a postulate in molecular biology. It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence. The dogma was championed by the Nobel laureate Christian B. Anfinsen, based on his research on the folding of ribonuclease A. The postulate amounts to saying that, at the environmental conditions (temperature, solvent concentration and composition, etc.) at which folding occurs, the native structure is a unique, stable and kinetically accessible minimum of the free energy. In other words, there are three conditions for formation of a unique protein structure: Uniqueness – Requires that the sequence does not have any other configuration with a comparable free energy. Hence the free energy minimum must be unchallenged. Stability – Small changes in the surrounding environment cannot give rise to changes in the minimum configuration. This can be pictured as a free energy surface that looks more like a funnel (with the native state in the bottom of it) rather than like a soup plate (with several closely related low-energy states); the free energy surface around the native state must be rather steep and high, in order to provide stability. Kinetic accessibility – The path in the free energy surface from the unfolded to the folded state must be reasonably smooth or, in other words, the folding of the chain must not involve highly complex changes in shape (such as knots or other high-order conformations). That said, the shape a protein adopts also depends on its environment, with proteins shifting shape to suit their surroundings; this gives biomolecules multiple configurations to shift between. Challenges to Anfinsen's dogma Protein folding in a cell is a highly complex process that involves transport of the newly synthesized proteins to appropriate cellular compartments through targeting, permanent misfolding, temporarily unfolded states, post-translational modifications, quality control, and formation of protein complexes facilitated by chaperones. Some proteins need the assistance of chaperone proteins to fold properly. It has been suggested that this disproves Anfinsen's dogma. However, the chaperones do not appear to affect the final state of the protein; they seem to work primarily by preventing aggregation of several protein molecules prior to the final folded state of the protein. However, at least some chaperones are required for the proper folding of their subject proteins. Many proteins can also undergo aggregation and misfolding. For example, prions are stable conformations of proteins which differ from the native folding state. In bovine spongiform encephalopathy, native proteins re-fold into a different stable conformation, which causes fatal amyloid buildup. Other amyloid diseases, including Alzheimer's disease and Parkinson's disease, are also exceptions to Anfinsen's dogma. Some proteins have multiple native structures, and change their fold based on external factors. For example, the KaiB protein complex switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB proteins switch folds.
The switching between alternative structures is driven by interactions of the protein with small ligands or other proteins, by chemical modifications (such as phosphorylation) or by changed environmental conditions, such as temperature, pH or membrane potential. Each alternative structure may either correspond to the global minimum of free energy of the protein at the given conditions or be kinetically trapped in a higher local minimum of free energy. References Further reading Profiles in Science: The Christian B. Anfinsen Papers-Articles Molecular biology Protein structure Hypotheses
The central science
Chemistry is often called the central science because of its role in connecting the physical sciences, which include chemistry, with the life sciences, pharmaceutical sciences and applied sciences such as medicine and engineering. The nature of this relationship is one of the main topics in the philosophy of chemistry and in scientometrics. The phrase was popularized by its use in a textbook by Theodore L. Brown and H. Eugene LeMay, titled Chemistry: The Central Science, which was first published in 1977, with a fifteenth edition published in 2021. The central role of chemistry can be seen in the systematic and hierarchical classification of the sciences by Auguste Comte. Each discipline provides a more general framework for the area it precedes (mathematics → astronomy → physics → chemistry → biology → social sciences). Balaban and Klein have more recently proposed a diagram showing the partial ordering of sciences in which chemistry may be argued to be "the central science", since it provides a significant degree of branching. In forming these connections, the lower field cannot be fully reduced to the higher ones. It is recognized that the lower fields possess emergent ideas and concepts that do not exist in the higher fields of science. Thus chemistry is built on an understanding of the laws of physics that govern particles such as atoms, protons, neutrons and electrons, as well as thermodynamics, although it has been shown that it has not been "fully 'reduced' to quantum mechanics". Concepts such as the periodicity of the elements and chemical bonds in chemistry are emergent in that they are more than the underlying forces defined by physics. In the same way, biology cannot be fully reduced to chemistry, although the machinery that is responsible for life is composed of molecules. For instance, the machinery of evolution may be described in terms of chemistry by the understanding that it is a mutation in the order of genetic base pairs in the DNA of an organism. However, chemistry cannot fully describe the process, since it does not contain concepts such as natural selection that are responsible for driving evolution. Chemistry is fundamental to biology since it provides a methodology for studying and understanding the molecules that compose cells. Connections made by chemistry are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. Chemistry and physics are both needed in the areas of physical chemistry, nuclear chemistry, and theoretical chemistry. Chemistry and biology intersect in the areas of biochemistry, medicinal chemistry, molecular biology, chemical biology, molecular genetics, and immunochemistry. Chemistry and the earth sciences intersect in areas like geochemistry and hydrology. See also Fundamental science Hard and soft science Philosophy of chemistry Special sciences Unity of science References Chemistry Philosophy of science Emergence
Disproportionation
In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation state. The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as symproportionation. More generally, the term can be applied to any desymmetrizing reaction where two molecules of one type react to give one each of two different types. This expanded definition is not limited to redox reactions, but also includes some molecular autoionization reactions, such as the self-ionization of water. In contrast, some authors use the term redistribution to refer to reactions of this type (in either direction) when only ligand exchange but no redox is involved, and distinguish such processes from disproportionation and comproportionation. For example, the Schlenk equilibrium is a redistribution reaction. History The first disproportionation reaction to be studied in detail was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it . Examples Mercury(I) chloride disproportionates upon UV-irradiation: Hg2Cl2 → Hg + HgCl2 Phosphorous acid disproportionates upon heating to 200 °C to give phosphoric acid and phosphine: 4 H3PO3 → 3 H3PO4 + PH3 Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate: 2 HCO3− → CO3^2− + H2O + CO2 The oxidation numbers remain constant in this acid-base reaction. Another variant on disproportionation is radical disproportionation, in which two radicals form an alkene and an alkane: 2 CH3-CH2• → H2C=CH2 + H3C-CH3 Disproportionation of sulfur intermediates by microorganisms is widely observed in sediments. Chlorine gas reacts with concentrated sodium hydroxide to form sodium chloride, sodium chlorate and water. The ionic equation for this reaction is as follows: 3 Cl2 + 6 OH− → 5 Cl− + ClO3− + 3 H2O The chlorine reactant is in oxidation state 0. In the products, the chlorine in the Cl− ion has an oxidation number of −1, having been reduced, whereas the oxidation number of the chlorine in the chlorate (ClO3−) ion is +5, indicating that it has been oxidized. Decomposition of numerous interhalogen compounds involves disproportionation. Bromine fluoride undergoes a disproportionation reaction to form bromine trifluoride and bromine in non-aqueous media: 3 BrF → BrF3 + Br2 The dismutation of the superoxide free radical to hydrogen peroxide and oxygen, catalysed in living systems by the enzyme superoxide dismutase: 2 O2•− + 2 H+ → H2O2 + O2 The oxidation state of oxygen is −1/2 in the superoxide free radical anion, −1 in hydrogen peroxide and 0 in dioxygen. In the Cannizzaro reaction, an aldehyde is converted into an alcohol and a carboxylic acid. In the related Tishchenko reaction, the organic redox reaction product is the corresponding ester. In the Kornblum–DeLaMare rearrangement, a peroxide is converted to a ketone and an alcohol. The disproportionation of hydrogen peroxide into water and oxygen, catalysed by either potassium iodide or the enzyme catalase: 2 H2O2 → 2 H2O + O2 In the Boudouard reaction, carbon monoxide disproportionates to carbon and carbon dioxide.
The reaction is for example used in the HiPco method for producing carbon nanotubes; high-pressure carbon monoxide disproportionates when catalysed on the surface of an iron particle: 2 CO → C + CO2 Nitrogen has oxidation state +4 in nitrogen dioxide, but when this compound reacts with water, it forms both nitric acid and nitrous acid, where nitrogen has oxidation states +5 and +3 respectively: 2 NO2 + H2O → HNO3 + HNO2 In hydrazoic acid and sodium azide, each of the 3 nitrogen atoms of these very energetic linear polyatomic species has an average oxidation state of −1/3. These unstable and highly toxic compounds will disproportionate in aqueous solution to form gaseous nitrogen and ammonium ions, or ammonia, depending on pH conditions, as can be conveniently verified by means of the Frost diagram for nitrogen. Under acidic conditions, hydrazoic acid disproportionates as: 3 HN3 + H+ → 4 N2 + NH4+ Under neutral or basic conditions, the azide anion disproportionates as: 3 N3− + 3 H2O → 4 N2 + NH3 + 3 OH− Dithionite undergoes acid hydrolysis to thiosulfate and bisulfite: 2 S2O4^2− + H2O → S2O3^2− + 2 HSO3− Dithionite also undergoes alkaline hydrolysis to sulfite and sulfide: 3 S2O4^2− + 6 OH− → 5 SO3^2− + S^2− + 3 H2O Dithionate is prepared on a larger scale by oxidizing a cooled aqueous solution of sulfur dioxide with manganese dioxide: MnO2 + 2 SO2 → MnS2O6 Polymer chemistry In free-radical chain-growth polymerization, chain termination can occur by a disproportionation step in which a hydrogen atom is transferred from one growing chain molecule to another one, which produces two dead (non-growing) chains: Chain—CH2–CHX• + Chain—CH2–CHX• → Chain—CH=CHX + Chain—CH2–CH2X in which Chain— represents the already formed polymer chain, and • indicates a reactive free radical. Biochemistry In 1937, Hans Adolf Krebs, who discovered the citric acid cycle bearing his name, confirmed the anaerobic dismutation of pyruvic acid into lactic acid, acetic acid and CO2 by certain bacteria according to the global reaction: 2 pyruvic acid + H2O → lactic acid + acetic acid + CO2 The dismutation of pyruvic acid into other small organic molecules (ethanol + CO2, or lactate and acetate, depending on the environmental conditions) is also an important step in fermentation reactions. Fermentation reactions can also be considered as disproportionation or dismutation biochemical reactions. Indeed, the donor and acceptor of electrons in the redox reactions supplying the chemical energy in these complex biochemical systems are the same organic molecules simultaneously acting as reductant or oxidant. Another example of a biochemical dismutation reaction is the disproportionation of acetaldehyde into ethanol and acetic acid. While in respiration electrons are transferred from the substrate (electron donor) to an electron acceptor, in fermentation part of the substrate molecule itself accepts the electrons. Fermentation is therefore a type of disproportionation, and does not involve an overall change in oxidation state of the substrate. Most of the fermentative substrates are organic molecules. However, a rare type of fermentation may also involve the disproportionation of inorganic sulfur compounds in certain sulfate-reducing bacteria. Disproportionation of sulfur intermediates Sulfur isotopes of sediments are often measured for studying environments in the Earth's past (paleoenvironments). Disproportionation of sulfur intermediates, being one of the processes affecting the sulfur isotopes of sediments, has drawn attention from geoscientists for studying the redox conditions in the oceans in the past. Sulfate-reducing bacteria fractionate sulfur isotopes as they take in sulfate and produce sulfide.
Prior to the 2010s, it was thought that sulfate reduction could fractionate sulfur isotopes by up to 46‰, and that fractionation larger than 46‰ recorded in sediments must therefore be due to disproportionation of sulfur intermediates in the sediment. This view has changed since the 2010s. As substrates for disproportionation are limited by the product of sulfate reduction, the isotopic effect of disproportionation should be less than 16‰ in most sedimentary settings. Disproportionation can be carried out by microorganisms that are obligate disproportionators or by microorganisms that can carry out sulfate reduction as well. Common substrates for disproportionation include elemental sulfur, thiosulfate and sulfite. Claus reaction: a comproportionation reaction The Claus reaction is an example of a comproportionation reaction (the inverse of disproportionation) involving hydrogen sulfide and sulfur dioxide to produce elemental sulfur and water as follows: 2 H2S + SO2 → 3 S + 2 H2O The Claus reaction is one of the chemical reactions involved in the Claus process used for the desulfurization of gases in oil refinery plants, leading to the formation of solid elemental sulfur, which is easier to store, transport, reuse when possible, and dispose of. See also Dismutase Oxidoreductase Fermentation (biochemistry) References Chemical reactions Chemical processes Organic reactions Biochemistry
Pentose phosphate pathway
The pentose phosphate pathway (also called the phosphogluconate pathway and the hexose monophosphate shunt or HMP shunt) is a metabolic pathway parallel to glycolysis. It generates NADPH and pentoses (five-carbon sugars) as well as ribose 5-phosphate, a precursor for the synthesis of nucleotides. While the pentose phosphate pathway does involve oxidation of glucose, its primary role is anabolic rather than catabolic. The pathway is especially important in red blood cells (erythrocytes). The reactions of the pathway were elucidated in the early 1950s by Bernard Horecker and co-workers. There are two distinct phases in the pathway. The first is the oxidative phase, in which NADPH is generated, and the second is the non-oxidative synthesis of five-carbon sugars. For most organisms, the pentose phosphate pathway takes place in the cytosol; in plants, most steps take place in plastids. Like glycolysis, the pentose phosphate pathway appears to have a very ancient evolutionary origin. The reactions of this pathway are mostly enzyme-catalyzed in modern cells; however, they also occur non-enzymatically under conditions that replicate those of the Archean ocean, where they are catalyzed by metal ions, particularly ferrous ions (Fe(II)). This suggests that the origins of the pathway could date back to the prebiotic world. Outcome The primary results of the pathway are: The generation of reducing equivalents, in the form of NADPH, used in reductive biosynthesis reactions within cells (e.g. fatty acid synthesis). Production of ribose 5-phosphate (R5P), used in the synthesis of nucleotides and nucleic acids. Production of erythrose 4-phosphate (E4P), used in the synthesis of aromatic amino acids. Aromatic amino acids, in turn, are precursors for many biosynthetic pathways, including the lignin in wood. Dietary pentose sugars derived from the digestion of nucleic acids may be metabolized through the pentose phosphate pathway, and the carbon skeletons of dietary carbohydrates may be converted into glycolytic/gluconeogenic intermediates. In mammals, the PPP occurs exclusively in the cytoplasm. In humans, it is found to be most active in the liver, mammary glands, and adrenal cortex. The PPP is one of the three main ways the body creates molecules with reducing power, accounting for approximately 60% of NADPH production in humans. One of the uses of NADPH in the cell is to prevent oxidative stress. NADPH reduces glutathione via glutathione reductase, and the reduced glutathione then converts reactive H2O2 into H2O via glutathione peroxidase. If this defense were absent, the H2O2 would be converted to hydroxyl free radicals by Fenton chemistry, which can attack the cell. Erythrocytes, for example, generate a large amount of NADPH through the pentose phosphate pathway to use in the reduction of glutathione. Hydrogen peroxide is also generated for phagocytes in a process often referred to as a respiratory burst. Phases Oxidative phase In this phase, two molecules of NADP+ are reduced to NADPH, utilizing the energy from the conversion of glucose-6-phosphate into ribulose 5-phosphate. The overall reaction for this phase is: Glucose 6-phosphate + 2 NADP+ + H2O → ribulose 5-phosphate + 2 NADPH + 2 H+ + CO2 Non-oxidative phase Net reaction: 3 ribulose-5-phosphate → 1 ribose-5-phosphate + 2 xylulose-5-phosphate → 2 fructose-6-phosphate + glyceraldehyde-3-phosphate Regulation Glucose-6-phosphate dehydrogenase is the rate-controlling enzyme of this pathway.
It is allosterically stimulated by NADP+ and strongly inhibited by NADPH. The ratio of NADPH:NADP+ is the primary mode of regulation for the enzyme and is normally about 100:1 in liver cytosol. This makes the cytosol a highly reducing environment. An NADPH-utilizing pathway forms NADP+, which stimulates glucose-6-phosphate dehydrogenase to produce more NADPH. This step is also inhibited by acetyl-CoA. G6PD activity is also post-translationally regulated by the cytoplasmic deacetylase SIRT2. SIRT2-mediated deacetylation and activation of G6PD stimulates the oxidative branch of the PPP to supply cytosolic NADPH to counteract oxidative damage or support de novo lipogenesis. Erythrocytes Several deficiencies in the level of activity (not function) of glucose-6-phosphate dehydrogenase have been observed to be associated with resistance to the malarial parasite Plasmodium falciparum among individuals of Mediterranean and African descent. The basis for this resistance may be a weakening of the red cell membrane (the erythrocyte is the host cell for the parasite) such that it cannot sustain the parasitic life cycle long enough for productive growth. See also G6PD deficiency – A hereditary disease that disrupts the pentose phosphate pathway RNA Thiamine deficiency Frank Dickens FRS References External links The chemical logic behind the pentose phosphate pathway Pentose phosphate pathway Map – Homo sapiens
Infrared spectroscopy correlation table
An infrared spectroscopy correlation table (or table of infrared absorption frequencies) is a list of absorption peaks and frequencies, typically reported in wavenumbers, for common types of molecular bonds and functional groups. In physical and analytical chemistry, infrared spectroscopy (IR spectroscopy) is a technique used to identify chemical compounds based on the way infrared radiation is absorbed by the compound. These absorptions are not limited to bonds in organic molecules; IR spectroscopy is also useful for the analysis of inorganic compounds, such as metal complexes or fluoromanganates. Group frequencies Tables of vibrational transitions of stable and transient molecules are also available. See also Applied spectroscopy Absorption spectroscopy References Infrared spectroscopy Chemistry-related lists
Avogadro constant
The Avogadro constant, commonly denoted NA or L, is an SI defining constant with an exact value of 6.02214076×10²³ mol⁻¹ (reciprocal moles). It is defined as the number of constituent particles (usually molecules, atoms, ions, or ion pairs) per mole (SI unit) and is used as a normalization factor in the amount of substance in a sample. In the SI dimensional analysis of measurement units, the dimension of the Avogadro constant is the reciprocal of amount of substance, denoted N−1. The Avogadro number, sometimes denoted N0, is the numeric value of the Avogadro constant (i.e., without a unit), namely the dimensionless number 6.02214076×10²³; the value was chosen based on the number of atoms in 12 grams of carbon-12, in alignment with the historical definition of a mole. The constant is named after the Italian physicist and chemist Amedeo Avogadro (1776–1856). The Avogadro constant is also the factor that converts the average mass of one particle, in grams, to the molar mass of the substance, in grams per mole (g/mol). That is, the molar mass equals the average particle mass multiplied by the Avogadro constant: M = m × NA. The constant also relates the molar volume (the volume per mole) of a substance to the average volume nominally occupied by one of its particles, when both are expressed in the same units of volume. For example, since the molar volume of water in ordinary conditions is about 18 cm³ per mole, the volume occupied by one molecule of water is about 18/(6.022×10²³) cm³, or about 0.030 nm³ (cubic nanometres). For a crystalline substance, it relates the volume of a crystal containing one mole's worth of repeating unit cells to the volume of a single cell (both in the same units). Definition The Avogadro constant was historically derived from the old definition of the mole as the amount of substance in 12 grams of carbon-12 (12C); or, equivalently, the number of daltons in a gram, where the dalton is defined as 1/12 of the mass of a 12C atom. By this old definition, the numerical value of the Avogadro constant in mol−1 (the Avogadro number) was a physical constant that had to be determined experimentally. The redefinition of the mole in 2019, as being the amount of substance containing exactly 6.02214076×10²³ particles, meant that the mass of 1 mole of a substance is now exactly the product of the Avogadro number and the average mass of its particles. The dalton, however, is still defined as 1/12 of the mass of a 12C atom, which must be determined experimentally and is known only with finite accuracy. The prior experiments that aimed to determine the Avogadro constant are now re-interpreted as measurements of the value in grams of the dalton. By the old definition of the mole, the mass of one mole of a substance, expressed in grams, was numerically equal to the average mass of one particle in daltons. With the new definition, this numerical equivalence is no longer exact, as it is affected by the uncertainty of the value of the dalton in SI units. However, it is still applicable for all practical purposes. For example, the average mass of one molecule of water is about 18.0153 daltons, and the mass of one mole of water is about 18.0153 grams. Also, the Avogadro number is the approximate number of nucleons (protons and neutrons) in one gram of ordinary matter. In older literature, the Avogadro number was also denoted N, although that conflicts with the symbol for number of particles in statistical mechanics. History Origin of the concept The Avogadro constant is named after the Italian scientist Amedeo Avogadro (1776–1856), who, in 1811, first proposed that the volume of a gas (at a given pressure and temperature) is proportional to the number of atoms or molecules regardless of the nature of the gas.
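As a purely illustrative sketch of the conversion role described above (the numerical values mirror the figures quoted earlier; the script itself is not taken from any cited source), the constant can be used to move between per-mole and per-particle quantities:

```python
# Minimal sketch: using the Avogadro constant as a conversion factor between
# per-particle and per-mole quantities. The water figures are approximate.

AVOGADRO = 6.02214076e23        # mol^-1, exact by the 2019 SI definition
MOLAR_MASS_WATER = 18.0153      # g/mol, approximate

# Average mass of a single water molecule, in grams.
mass_per_molecule_g = MOLAR_MASS_WATER / AVOGADRO
print(f"One H2O molecule weighs about {mass_per_molecule_g:.3e} g")

# Number of molecules in one gram of water.
molecules_per_gram = AVOGADRO / MOLAR_MASS_WATER
print(f"One gram of water contains about {molecules_per_gram:.3e} molecules")

# Volume per molecule from the molar volume (about 18 cm^3/mol for liquid water).
MOLAR_VOLUME_WATER = 18.0       # cm^3/mol, approximate
volume_per_molecule_nm3 = MOLAR_VOLUME_WATER / AVOGADRO * 1e21  # 1 cm^3 = 1e21 nm^3
print(f"Each molecule nominally occupies about {volume_per_molecule_nm3:.3f} nm^3")
```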
Avogadro's hypothesis was popularized four years after his death by Stanislao Cannizzaro, who advocated Avogadro's work at the Karlsruhe Congress in 1860. The name Avogadro's number was coined in 1909 by the physicist Jean Perrin, who defined it as the number of molecules in exactly 32 grams of oxygen gas. The goal of this definition was to make the mass of a mole of a substance, in grams, be numerically equal to the mass of one molecule relative to the mass of the hydrogen atom; which, because of the law of definite proportions, was the natural unit of atomic mass, and was assumed to be 1/16 of the atomic mass of oxygen. First measurements The value of Avogadro's number (not yet known by that name) was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. This value, the number density n0 of particles in an ideal gas, is now called the Loschmidt constant in his honor, and is related to the Avogadro constant, NA, by n0 = p0 NA / (R T0), where p0 is the pressure, R is the gas constant, and T0 is the absolute temperature. Because of this work, the symbol L is sometimes used for the Avogadro constant, and, in German literature, that name may be used for both constants, distinguished only by the units of measurement. (However, NA should not be confused with the entirely different Loschmidt constant in English-language literature.) Perrin himself determined the Avogadro number by several different experimental methods. He was awarded the 1926 Nobel Prize in Physics, largely for this work. The electric charge per mole of electrons is a constant called the Faraday constant and has been known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan, with the help of Harvey Fletcher, obtained the first measurement of the charge on an electron. Dividing the charge on a mole of electrons by the charge on a single electron provided a more accurate estimate of the Avogadro number. SI definition of 1971 In 1971, in its 14th conference, the International Bureau of Weights and Measures (BIPM) decided to regard the amount of substance as an independent dimension of measurement, with the mole as its base unit in the International System of Units (SI). Specifically, the mole was defined as an amount of a substance that contains as many elementary entities as there are atoms in 0.012 kilograms (12 grams) of carbon-12 (12C). Thus, in particular, one mole of carbon-12 was exactly 12 grams of the element. By this definition, one mole of any substance contained exactly as many elementary entities as one mole of any other substance. However, this number was a physical constant that had to be experimentally determined since it depended on the mass (in grams) of one atom of 12C, and therefore, it was known only to a limited number of decimal digits. The common rule of thumb that "one gram of matter contains about 6.022×10²³ nucleons" was exact for carbon-12, but slightly inexact for other elements and isotopes. In the same conference, the BIPM also named NA (the factor that converted moles into number of particles) the "Avogadro constant". However, the term "Avogadro number" continued to be used, especially in introductory works. As a consequence of this definition, NA was not a pure number, but had the metric dimension of reciprocal of amount of substance (mol−1). SI redefinition of 2019 In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant as the exact value 6.02214076×10²³ mol⁻¹, thus redefining the mole as exactly 6.02214076×10²³ constituent particles of the substance under consideration.
One consequence of this change is that the mass of a mole of 12C atoms is no longer exactly 0.012 kg. On the other hand, the dalton (the universal atomic mass unit) remains unchanged as 1/12 of the mass of 12C. Thus, the molar mass constant remains very close to, but no longer exactly equal to, 1 g/mol, although the difference (on the order of 10⁻¹⁰ in relative terms, as of March 2019) is insignificant for all practical purposes. Connection to other constants The Avogadro constant NA is related to other physical constants and properties. It relates the molar gas constant R and the Boltzmann constant kB, which in the SI is defined to be exactly 1.380649×10⁻²³ J/K: R = NA kB. It relates the Faraday constant F and the elementary charge e, which in the SI is defined as exactly 1.602176634×10⁻¹⁹ C: F = NA e. It relates the molar mass constant Mu and the atomic mass constant mu, currently approximately 1.66054×10⁻²⁷ kg: Mu = NA mu. See also CODATA 2018 List of scientists whose names are used in physical constants Mole Day References External links 1996 definition of the Avogadro constant from the IUPAC Compendium of Chemical Terminology ("Gold Book") Some Notes on Avogadro's Number (historical notes) An Exact Value for Avogadro's Number – American Scientist Avogadro and molar Planck constants for the redefinition of the kilogram Scanned version of "Two hypothesis of Avogadro", 1811 Avogadro's article, on BibNum Amount of substance Fundamental constants Physical constants Units of amount
Geomorphology
Geomorphology (from Ancient Greek: , , 'earth'; , , 'form'; and , , 'study') is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics and to predict changes through a combination of field observations, physical experiments and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field. Overview Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence, and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere. The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes. In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers, how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans. 
Soils geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets. Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital to quantitatively describe the form of the Earth's surface, and include differential GPS, remotely sensed digital terrain models and laser scanning, to quantify, study, and to generate illustrations and maps. Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection. Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of effects of wind, fluvial, glacial, mass wasting, meteor impact, tectonics and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of surfaces of other planets. History Other than some notable exceptions in antiquity, geomorphology is a relatively young science, growing along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development. Ancient geomorphology The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, Greek philosopher Aristotle speculated that due to sediment transport into the sea, eventually those seas would fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity published in Arabic at Basra during the 10th century also discussed the cyclical changing positions of land and sea with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering. Another early theory of geomorphology was devised by Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean. 
Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the pre-historic location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosions of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he promoted the theory of gradual climate change over centuries of time once ancient petrified bamboos were found to be preserved underground in the dry, northern climate zone of Yanzhou, which is now modern day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. Scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time as would hills and valleys. Daoist alchemist Ge Hong (284–364) created a fictional dialogue where the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees. Early modern geomorphology The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr in his The Scientific Study of Scenery considered his book as, 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'. An early popular geomorphic model was the geographical cycle or cycle of erosion model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature. In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis' dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection. 
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of the evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions. During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography was later considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions, while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology" in order to suggest an analytical approach to landscapes rather than a descriptive one. Climatic geomorphology During the age of New Imperialism in the late 19th century, European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time, these observations were systematized in a search for regional patterns. Climate thus emerged as the prime factor for explaining landform distribution at a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology, which was by the mid-20th century considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe, while in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion. Climatic geomorphology was criticized in a 1969 review article by the process geomorphologist D.R. Stoddart. The criticism by Stoddart proved "devastating", sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, for being linked to Davisian geomorphology, and for allegedly neglecting the fact that the physical laws governing processes are the same across the globe. In addition, some conceptions of climatic geomorphology, such as the idea that chemical weathering is more rapid in tropical climates than in cold climates, proved not to be straightforwardly true. Quantitative and process geomorphology Geomorphology began to be put on a solid quantitative footing in the middle of the 20th century.
Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America, and received only few citations prior to 2000 (they are examples of "sleeping beauties") when a marked increase in quantitative geomorphology research occurred. Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition. In Sweden Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography". Contemporary geomorphology Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally, recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include: 1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form. Instead, dynamic changes of the landscape are now seen as an essential part of their nature. 2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results. According to Karna Lidmar-Bergström, regional geography is since the 1990s no longer accepted by mainstream scholarship as a basis for geomorphological studies. Albeit having its importance diminished, climatic geomorphology continues to exist as field of study producing relevant research. More recently concerns over global warming have led to a renewed interest in the field. Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven. 
The inherent difficulties of the model have instead made geomorphological research to advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value respectively. Processes Geomorphically relevant processes generally fall into (1) the production of regolith by weathering and erosion, (2) the transport of that material, and (3) its eventual deposition. Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor or extraterrestrial impact. Aeolian processes Aeolian processes pertain to the activity of the winds and more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts. Biological processes The interaction of living organisms with landforms, or biogeomorphologic processes, can be of many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence very many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars. Fluvial processes Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements. As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic happens to be the most common, occurring when the underlying stratum is stable (without faulting). 
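The quantitative treatment of drainage networks hinted at above can be illustrated with a small, self-contained example. The sketch below (a hypothetical illustration, not taken from any study cited here) computes the Strahler stream order of a toy network: headwater streams have order 1, and the order increases by one only where two streams of equal order meet. Strahler ordering is a standard geomorphometric measure of the branching complexity of a drainage system.

```python
def strahler_order(tributaries):
    """Strahler order of a stream segment given its tributaries.

    Headwater segments (no tributaries) have order 1. Where two or more
    tributaries share the maximum order, the downstream segment's order
    increases by one; otherwise it keeps the maximum tributary order.
    """
    if not tributaries:
        return 1
    orders = [strahler_order(t) for t in tributaries]
    top = max(orders)
    return top + 1 if orders.count(top) >= 2 else top

# Toy dendritic network: each list holds the tributaries joining a segment.
headwater = []                        # order 1
small_confluence = [headwater, []]    # two order-1 streams meet -> order 2
trunk = [small_confluence, [[], []]]  # two order-2 streams meet -> order 3

print(strahler_order(trunk))  # -> 3
```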
Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces. Glacial processes Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, deposited when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin. The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that have been relatively recently glaciated but are no longer glaciated may still show elevated landscape change rates compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost. Hillslope processes Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus. Ongoing hillslope processes can change the topology of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas. On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes. Igneous processes Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenate landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also add substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding and then solidifying at depth can cause either uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces. Tectonic processes Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric, which more or less controls what kind of local morphology tectonics can shape. Earthquakes can, in a matter of minutes, submerge large areas of land, forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts. 
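The isostatic feedback described above, in which erosional removal of mass drives further rock uplift, can be illustrated with a back-of-the-envelope calculation. The sketch below assumes simple local (Airy-type) isostatic compensation and uses typical textbook densities for crust and mantle; the density values and the erosion figure are illustrative assumptions, not numbers from the text.

```python
# Minimal sketch of local isostatic rebound driven by erosional unloading.
# Densities and erosion depth are illustrative, textbook-style assumptions.
RHO_CRUST = 2700.0   # kg/m^3, typical upper-crustal rock
RHO_MANTLE = 3300.0  # kg/m^3, typical asthenospheric mantle

def isostatic_rebound(eroded_thickness_m):
    """Rock uplift produced by removing a layer of crust, assuming
    complete local (Airy) isostatic compensation."""
    return eroded_thickness_m * RHO_CRUST / RHO_MANTLE

erosion = 1000.0  # metres of rock stripped from a mountain belt
rebound = isostatic_rebound(erosion)
print(f"{erosion:.0f} m of erosion -> ~{rebound:.0f} m of rebound, "
      f"so the surface only drops by ~{erosion - rebound:.0f} m")
# ~1000 m of erosion is compensated by ~818 m of uplift: net lowering ~182 m.
```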
Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production. Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long term (> million year), large scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy as hotter, less dense, mantle rocks displace cooler, denser, mantle rocks at depth in the Earth. Marine processes Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology. Overlap with other fields There is a considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near surface agents, and is typically studied by soil scientists and environmental chemists, but is an essential component of geomorphology because it is what provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important entities in the high latitudes and meaning that they set the conditions in the headwaters of mountain-born streams; glaciology therefore is important in geomorphology. See also Bioerosion Biogeology Biogeomorphology Biorhexistasy British Society for Geomorphology Coastal biogeomorphology Coastal erosion Concepts and Techniques in Modern Geography Drainage system (geomorphology) Erosion prediction Geologic modelling Geomorphometry Geotechnics Hack's law Hydrologic modeling, behavioral modeling in hydrology List of landforms Orogeny Physiographic regions of the world Sediment transport Soil morphology Soils retrogression and degradation Stream capture Thermochronology References Further reading Ialenti, Vincent. "Envisioning Landscapes of Our Very Distant Future" NPR Cosmos & Culture. 9/2014. Bierman, P.R.; Montgomery, D.R. Key Concepts in Geomorphology. New York: W. H. Freeman, 2013. . Ritter, D.F.; Kochel, R.C.; Miller, J.R.. Process Geomorphology. London: Waveland Pr Inc, 2011. . Hargitai H., Page D., Canon-Tapia E. and Rodrigue C.M..; Classification and Characterization of Planetary Landforms. in: Hargitai H, Kereszturi Á, eds, Encyclopedia of Planetary Landforms. Cham: Springer 2015 External links The Geographical Cycle, or the Cycle of Erosion (1899) Geomorphology from Space (NASA) British Society for Geomorphology Earth sciences Geology Geological processes Gravity Physical geography Planetary science Seismology Topography
Molecular biophysics
Molecular biophysics is a rapidly evolving interdisciplinary area of research that combines concepts in physics, chemistry, engineering, mathematics and biology. It seeks to understand biomolecular systems and explain biological function in terms of molecular structure, structural organization, and dynamic behaviour at various levels of complexity (from single molecules to supramolecular structures, viruses and small living systems). This discipline covers topics such as the measurement of molecular forces, molecular associations, allosteric interactions, Brownian motion, and cable theory. Additional areas of study can be found on Outline of Biophysics. The discipline has required development of specialized equipment and procedures capable of imaging and manipulating minute living structures, as well as novel experimental approaches. Overview Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions. Fluorescent imaging techniques, as well as electron microscopy, X-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules. Areas of Research Computational biology Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, ecological, behavioral, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science and evolution. Computational biology has become an important part of developing emerging technologies for the field of biology. Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. 
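As a concrete, if highly simplified, illustration of the molecular modelling methods just mentioned, the sketch below evaluates the Lennard-Jones 12-6 potential, a standard pair potential used in molecular mechanics and molecular dynamics to approximate van der Waals interactions between non-bonded atoms. The parameter values are generic argon-like numbers chosen purely for illustration and are not taken from the text.

```python
import numpy as np

def lennard_jones(r, epsilon=0.997, sigma=3.40):
    """Lennard-Jones 12-6 pair potential.

    epsilon: well depth (kJ/mol); sigma: zero-crossing distance (angstrom).
    Defaults are generic argon-like parameters, used purely for illustration.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Scan the pair separation to locate the energy minimum, which for the
# 12-6 form sits at r_min = 2**(1/6) * sigma.
r = np.linspace(3.0, 8.0, 501)
energies = lennard_jones(r)
r_min = r[np.argmin(energies)]
print(f"minimum near r = {r_min:.2f} A (analytic: {2**(1/6) * 3.40:.2f} A)")
```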
Membrane biophysics Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections. Motor proteins Motor proteins are a class of molecular motors that can move along the cytoplasm of animal cells. They convert chemical energy into mechanical work by the hydrolysis of ATP. A good example is the muscle protein myosin which "motors" the contraction of muscle fibers in animals. Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm. Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis. Axonemal dynein, found in cilia and flagella, is crucial to cell motility, for example in spermatozoa, and fluid transport, for example in trachea. Some biological machines are motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines...Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics. Other biological machines are responsible for energy production, for example ATP synthase which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. These molecular motors are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors. Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines. Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay "There's Plenty of Room at the Bottom". 
These biological machines might have applications in nanomedicine. For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities. Protein folding Protein folding is the physical process by which a protein chain acquires its native 3-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner. It is the physical process by which a polypeptide folds into its characteristic and functional three-dimensional structure from a random coil. Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA to a linear chain of amino acids. This polypeptide lacks any stable (long-lasting) three-dimensional structure (the left hand side of the first figure). As the polypeptide chain is being synthesized by a ribosome, the linear chain begins to fold into its three-dimensional structure. Folding begins to occur even during the translation of the polypeptide chain. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right-hand side of the figure), known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence or primary structure (Anfinsen's dogma). Protein structure determination As the three-dimensional structure of proteins brings with it an understanding of its function and biological context, there is great effort placed in observing the structures of proteins. X-ray crystallography was the primary method used in the 20th century to solve the structures of proteins in their crystalline form. Ever since the early 2000s, cryogenic electron microscopy has been used to solve the structures of proteins closer to their native state, as well as observing cellular structures. Protein structure prediction Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its folding and its secondary and tertiary structure from its primary structure. Structure prediction is fundamentally different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine, in drug design, biotechnology and in the design of novel enzymes). Every two years, the performance of current methods is assessed in the CASP experiment (Critical Assessment of Techniques for Protein Structure Prediction). A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D. The challenge in predicting protein structures is that there lacks a physical model that can fully predict protein tertiary structures from their amino acid sequence. 
This problem is known as the de novo protein structure prediction problem and is one of the great problems of modern science. AlphaFold, an artificial intelligence program, is able to accurately predict the structures of proteins with genetic homology to other proteins that have been previously solved. Though, this is not a solution to the de novo problem, as it relies on a database of prior data which results in it always being biased. The solution to the de novo protein structure prediction problem must be a purely physical model that will simulate the protein folding in its native environment, resulting in the in silico observation of protein structures and dynamics that were never previously observed. Spectroscopy Spectroscopic techniques like NMR, spin label electron spin resonance, Raman spectroscopy, infrared spectroscopy, circular dichroism, and so on have been widely used to understand structural dynamics of important biomolecules and intermolecular interactions. See also Small angle scattering Biophysical chemistry Biophysics Biophysical Society Cryo-electron microscopy (cryo-EM) Dual-polarization interferometry and circular dichroism Electron paramagnetic resonance (EPR) European Biophysical Societies' Association Index of biophysics articles List of publications in biology – Biophysics List of publications in physics – Biophysics List of biophysicists Outline of biophysics Mass spectrometry Medical biophysics Membrane biophysics Multiangle light scattering Neurophysics Nuclear magnetic resonance spectroscopy of proteins (NMR) Physiomics Proteolysis Ultrafast laser spectroscopy Virophysics Macromolecular crystallography References
Electrophysiology
Electrophysiology (from Greek ēlektron, "amber" [see the etymology of "electron"]; physis, "nature, origin"; and -logia) is the branch of physiology that studies the electrical properties of biological cells and tissues. It involves measurements of voltage changes or electric current, or manipulations, on a wide variety of scales from single ion channel proteins to whole organs like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and, in particular, action potential activity. Recordings of large-scale electric signals from the nervous system, such as electroencephalography, may also be referred to as electrophysiological recordings. They are useful for electrodiagnosis and monitoring. Definition and scope Classical electrophysiological techniques Principle and mechanisms Electrophysiology is the branch of physiology that pertains broadly to the flow of ions (ion current) in biological tissues and, in particular, to the electrical recording techniques that enable the measurement of this flow. Classical electrophysiology techniques involve placing electrodes into various preparations of biological tissue. The principal types of electrodes are: simple solid conductors, such as discs and needles (singles or arrays, often insulated except for the tip); tracings on printed circuit boards or flexible polymers, also insulated except for the tip; and hollow, often elongated or 'pulled', tubes filled with an electrolyte, such as glass pipettes filled with potassium chloride solution or another electrolyte solution. The principal preparations include living organisms (for example, insects), excised tissue (acute or cultured), dissociated cells from excised tissue (acute or cultured), artificially grown cells or tissues, or hybrids of the above. Neuronal electrophysiology is the study of the electrical properties of biological cells and tissues within the nervous system. With neuronal electrophysiology, doctors and specialists can investigate how neuronal disorders arise by examining an individual's brain activity, for example which portions of the brain are active in particular situations. If an electrode is small enough (micrometers) in diameter, then the electrophysiologist may choose to insert the tip into a single cell. Such a configuration allows direct observation and recording of the intracellular electrical activity of a single cell. However, this invasive setup reduces the life of the cell and causes a leak of substances across the cell membrane. Intracellular activity may also be observed using a specially formed (hollow) glass pipette containing an electrolyte. In this technique, the microscopic pipette tip is pressed against the cell membrane, to which it tightly adheres by an interaction between glass and lipids of the cell membrane. The electrolyte within the pipette may be brought into fluid continuity with the cytoplasm by delivering a pulse of negative pressure to the pipette in order to rupture the small patch of membrane encircled by the pipette rim (whole-cell recording). Alternatively, ionic continuity may be established by "perforating" the patch by allowing exogenous pore-forming agents within the electrolyte to insert themselves into the membrane patch (perforated patch recording). Finally, the patch may be left intact (patch recording). The electrophysiologist may choose not to insert the tip into a single cell. Instead, the electrode tip may be left in continuity with the extracellular space. 
If the tip is small enough, such a configuration may allow indirect observation and recording of action potentials from a single cell, termed single-unit recording. Depending on the preparation and precise placement, an extracellular configuration may pick up the activity of several nearby cells simultaneously, termed multi-unit recording. As electrode size increases, the resolving power decreases. Larger electrodes are sensitive only to the net activity of many cells, termed local field potentials. Still larger electrodes, such as uninsulated needles and surface electrodes used by clinical and surgical neurophysiologists, are sensitive only to certain types of synchronous activity within populations of cells numbering in the millions. Other classical electrophysiological techniques include single channel recording and amperometry. Electrographic modalities by body part Electrophysiological recording in general is sometimes called electrography (from electro- + -graphy, "electrical recording"), with the record thus produced being an electrogram. However, the word electrography has other senses (including electrophotography), and the specific types of electrophysiological recording are usually called by specific names, constructed on the pattern of electro- + [body part combining form] + -graphy (abbreviation ExG). Relatedly, the word electrogram (not being needed for those other senses) often carries the specific meaning of intracardiac electrogram, which is like an electrocardiogram but with some invasive leads (inside the heart) rather than only noninvasive leads (on the skin). Electrophysiological recording for clinical diagnostic purposes is included within the category of electrodiagnostic testing. The various "ExG" modes are as follows: Optical electrophysiological techniques Optical electrophysiological techniques were created by scientists and engineers to overcome one of the main limitations of classical techniques. Classical techniques allow observation of electrical activity at approximately a single point within a volume of tissue. Classical techniques singularize a distributed phenomenon. Interest in the spatial distribution of bioelectric activity prompted development of molecules capable of emitting light in response to their electrical or chemical environment. Examples are voltage sensitive dyes and fluorescing proteins. After introducing one or more such compounds into tissue via perfusion, injection or gene expression, the 1 or 2-dimensional distribution of electrical activity may be observed and recorded. Intracellular recording Intracellular recording involves measuring voltage and/or current across the membrane of a cell. To make an intracellular recording, the tip of a fine (sharp) microelectrode must be inserted inside the cell, so that the membrane potential can be measured. Typically, the resting membrane potential of a healthy cell will be -60 to -80 mV, and during an action potential the membrane potential might reach +40 mV. In 1963, Alan Lloyd Hodgkin and Andrew Fielding Huxley won the Nobel Prize in Physiology or Medicine for their contribution to understanding the mechanisms underlying the generation of action potentials in neurons. Their experiments involved intracellular recordings from the giant axon of Atlantic squid (Loligo pealei), and were among the first applications of the "voltage clamp" technique. 
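The membrane potential values quoted above follow largely from the ionic concentration gradients across the cell membrane, which the Nernst equation converts into an equilibrium potential for each ion. The sketch below evaluates it for potassium and sodium; the concentration figures are common textbook values for a mammalian neuron, not measurements from the text.

```python
import math

R = 8.314      # J/(mol*K), gas constant
F = 96485.0    # C/mol, Faraday constant
T = 310.0      # K, roughly 37 degrees C

def nernst_mV(z, conc_out_mM, conc_in_mM):
    """Equilibrium (Nernst) potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian neuron concentrations (mM), textbook values.
print(f"E_K  = {nernst_mV(+1, conc_out_mM=5.0,   conc_in_mM=140.0):6.1f} mV")
print(f"E_Na = {nernst_mV(+1, conc_out_mM=145.0, conc_in_mM=12.0):6.1f} mV")
# E_K comes out near -89 mV and E_Na near +67 mV, bracketing the resting
# potential (-60 to -80 mV) and the action potential peak (~+40 mV) above.
```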
Today, most microelectrodes used for intracellular recording are glass micropipettes, with a tip diameter of < 1 micrometre, and a resistance of several megohms. The micropipettes are filled with a solution that has a similar ionic composition to the intracellular fluid of the cell. A chlorided silver wire inserted into the pipette connects the electrolyte electrically to the amplifier and signal processing circuit. The voltage measured by the electrode is compared to the voltage of a reference electrode, usually a silver chloride-coated silver wire in contact with the extracellular fluid around the cell. In general, the smaller the electrode tip, the higher its electrical resistance. So an electrode is a compromise between size (small enough to penetrate a single cell with minimum damage to the cell) and resistance (low enough so that small neuronal signals can be discerned from thermal noise in the electrode tip). Maintaining healthy brain slices is pivotal for successful electrophysiological recordings. The preparation of these slices is commonly achieved with tools such as the Compresstome vibratome, ensuring optimal conditions for accurate and reliable recordings. Nevertheless, even with the highest standards of tissue handling, slice preparation induces rapid and robust phenotype changes of the brain's major immune cells, microglia, which must be taken into consideration when using this model. Voltage clamp The voltage clamp technique allows an experimenter to "clamp" the cell potential at a chosen value. This makes it possible to measure how much ionic current crosses a cell's membrane at any given voltage. This is important because many of the ion channels in the membrane of a neuron are voltage-gated ion channels, which open only when the membrane voltage is within a certain range. Voltage clamp measurements of current are made possible by the near-simultaneous digital subtraction of transient capacitive currents that pass as the recording electrode and cell membrane are charged to alter the cell's potential. Current clamp The current clamp technique records the membrane potential by injecting current into a cell through the recording electrode. Unlike in the voltage clamp mode, where the membrane potential is held at a level determined by the experimenter, in "current clamp" mode the membrane potential is free to vary, and the amplifier records whatever voltage the cell generates on its own or as a result of stimulation. This technique is used to study how a cell responds when electric current enters a cell; this is important for instance for understanding how neurons respond to neurotransmitters that act by opening membrane ion channels. Most current-clamp amplifiers provide little or no amplification of the voltage changes recorded from the cell. The "amplifier" is actually an electrometer, sometimes referred to as a "unity gain amplifier"; its main purpose is to reduce the electrical load on the small signals (in the mV range) produced by cells so that they can be accurately recorded by low-impedance electronics. The amplifier increases the current behind the signal while decreasing the resistance over which that current passes. Consider this example based on Ohm's law: A voltage of 10 mV is generated by passing 10 nanoamperes of current across 1 MΩ of resistance. The electrometer changes this "high impedance signal" to a "low impedance signal" by using a voltage follower circuit. 
A voltage follower reads the voltage on the input (caused by a small current through a big resistor). It then instructs a parallel circuit that has a large current source behind it (the electrical mains) and adjusts the resistance of that parallel circuit to give the same output voltage, but across a lower resistance. Patch-clamp recording This technique was developed by Erwin Neher and Bert Sakmann who received the Nobel Prize in 1991. Conventional intracellular recording involves impaling a cell with a fine electrode; patch-clamp recording takes a different approach. A patch-clamp microelectrode is a micropipette with a relatively large tip diameter. The microelectrode is placed next to a cell, and gentle suction is applied through the microelectrode to draw a piece of the cell membrane (the 'patch') into the microelectrode tip; the glass tip forms a high resistance 'seal' with the cell membrane. This configuration is the "cell-attached" mode, and it can be used for studying the activity of the ion channels that are present in the patch of membrane. If more suction is now applied, the small patch of membrane in the electrode tip can be displaced, leaving the electrode sealed to the rest of the cell. This "whole-cell" mode allows very stable intracellular recording. A disadvantage (compared to conventional intracellular recording with sharp electrodes) is that the intracellular fluid of the cell mixes with the solution inside the recording electrode, and so some important components of the intracellular fluid can be diluted. A variant of this technique, the "perforated patch" technique, tries to minimize these problems. Instead of applying suction to displace the membrane patch from the electrode tip, it is also possible to make small holes on the patch with pore-forming agents so that large molecules such as proteins can stay inside the cell and ions can pass through the holes freely. Also the patch of membrane can be pulled away from the rest of the cell. This approach enables the membrane properties of the patch to be analyzed pharmacologically. Patch-clamp may also be combined with RNA sequencing in a technique known as patch-seq by extracting the cellular contents following recording in order to characterize the electrophysiological properties relationship to gene expression and cell-type. Sharp electrode recording In situations where one wants to record the potential inside the cell membrane with minimal effect on the ionic constitution of the intracellular fluid a sharp electrode can be used. These micropipettes (electrodes) are again like those for patch clamp pulled from glass capillaries, but the pore is much smaller so that there is very little ion exchange between the intracellular fluid and the electrolyte in the pipette. The electrical resistance of the micropipette electrode is reduced by filling with 2-4M KCl, rather than a salt concentration which mimics the intracellular ionic concentrations as used in patch clamping. Often the tip of the electrode is filled with various kinds of dyes like Lucifer yellow to fill the cells recorded from, for later confirmation of their morphology under a microscope. The dyes are injected by applying a positive or negative, DC or pulsed voltage to the electrodes depending on the polarity of the dye. Extracellular recording Single-unit recording An electrode introduced into the brain of a living animal will detect electrical activity that is generated by the neurons adjacent to the electrode tip. 
If the electrode is a microelectrode, with a tip size of about 1 micrometre, the electrode will usually detect the activity of at most one neuron. Recording in this way is in general called "single-unit" recording. The action potentials recorded are very much like the action potentials that are recorded intracellularly, but the signals are very much smaller (typically about 1 mV). Most recordings of the activity of single neurons in anesthetized and conscious animals are made in this way. Recordings of single neurons in living animals have provided important insights into how the brain processes information. For example, David Hubel and Torsten Wiesel recorded the activity of single neurons in the primary visual cortex of the anesthetized cat, and showed how single neurons in this area respond to very specific features of a visual stimulus. Hubel and Wiesel were awarded the Nobel Prize in Physiology or Medicine in 1981. To prepare the brain for such electrode insertion, delicate slicing devices like the compresstome vibratome, leica vibratome, microtome are often employed. These instruments aid in obtaining precise, thin brain sections necessary for electrode placement, enabling neuroscientists to target specific brain regions for recording. Multi-unit recording If the electrode tip is slightly larger, then the electrode might record the activity generated by several neurons. This type of recording is often called "multi-unit recording", and is often used in conscious animals to record changes in the activity in a discrete brain area during normal activity. Recordings from one or more such electrodes that are closely spaced can be used to identify the number of cells around it as well as which of the spikes come from which cell. This process is called spike sorting and is suitable in areas where there are identified types of cells with well defined spike characteristics. If the electrode tip is bigger still, in general the activity of individual neurons cannot be distinguished but the electrode will still be able to record a field potential generated by the activity of many cells. Field potentials Extracellular field potentials are local current sinks or sources that are generated by the collective activity of many cells. Usually, a field potential is generated by the simultaneous activation of many neurons by synaptic transmission. The diagram to the right shows hippocampal synaptic field potentials. At the right, the lower trace shows a negative wave that corresponds to a current sink caused by positive charges entering cells through postsynaptic glutamate receptors, while the upper trace shows a positive wave that is generated by the current that leaves the cell (at the cell body) to complete the circuit. For more information, see local field potential. Amperometry Amperometry uses a carbon electrode to record changes in the chemical composition of the oxidized components of a biological solution. Oxidation and reduction is accomplished by changing the voltage at the active surface of the recording electrode in a process known as "scanning". Because certain brain chemicals lose or gain electrons at characteristic voltages, individual species can be identified. Amperometry has been used for studying exocytosis in the nervous and endocrine systems. Many monoamine neurotransmitters; e.g., norepinephrine (noradrenalin), dopamine, and serotonin (5-HT) are oxidizable. 
The method can also be used with cells that do not secrete oxidizable neurotransmitters by "loading" them with 5-HT or dopamine. Planar patch clamp Planar patch clamp is a novel method developed for high-throughput electrophysiology. Instead of positioning a pipette on an adherent cell, a cell suspension is pipetted onto a chip containing a microstructured aperture. A single cell is then positioned on the hole by suction and a tight connection (gigaseal) is formed. The planar geometry offers a variety of advantages compared with the classical experiment: it allows for the integration of microfluidics, which enables automatic compound application for ion channel screening; the system is accessible to optical or scanning probe techniques; and perfusion of the intracellular side can be performed. Other methods Solid-supported membrane (SSM)-based With this electrophysiological approach, proteoliposomes, membrane vesicles, or membrane fragments containing the channel or transporter of interest are adsorbed to a lipid monolayer painted over a functionalized electrode. This electrode consists of a glass support, a chromium layer, a gold layer, and an octadecyl mercaptan monolayer. Because the painted membrane is supported by the electrode, it is called a solid-supported membrane. Mechanical perturbations, which usually destroy a biological lipid membrane, do not influence the lifetime of an SSM. The capacitive electrode (composed of the SSM and the adsorbed vesicles) is so mechanically stable that solutions may be rapidly exchanged at its surface. This property allows the application of rapid substrate/ligand concentration jumps to investigate the electrogenic activity of the protein of interest, measured via capacitive coupling between the vesicles and the electrode. Bioelectric recognition assay (BERA) The bioelectric recognition assay (BERA) is a novel method for the determination of various chemical and biological molecules by measuring changes in the membrane potential of cells immobilized in a gel matrix. Apart from the increased stability of the electrode-cell interface, immobilization preserves the viability and physiological functions of the cells. BERA is used primarily in biosensor applications in order to assay analytes that can interact with the immobilized cells by changing the cell membrane potential. In this way, when a positive sample is added to the sensor, a characteristic, "signature-like" change in electrical potential occurs. BERA is the core technology behind the recently launched pan-European FOODSCAN project on pesticide and food risk assessment in Europe. BERA has been used for the detection of human viruses (hepatitis B and C viruses and herpes viruses), veterinary disease agents (foot and mouth disease virus, prions, and blue tongue virus), and plant viruses (tobacco and cucumber viruses) in a specific, rapid (1–2 minutes), reproducible, and cost-efficient fashion. The method has also been used for the detection of environmental toxins, such as pesticides and mycotoxins in food, and 2,4,6-trichloroanisole in cork and wine, as well as the determination of very low concentrations of the superoxide anion in clinical samples. A BERA sensor has two parts: the consumable biorecognition elements and the electronic read-out device with embedded artificial intelligence. A recent advance is the development of a technique called molecular identification through membrane engineering (MIME). 
This technique allows for building cells with defined specificity for virtually any molecule of interest, by embedding thousands of artificial receptors into the cell membrane. Computational electrophysiology While not strictly constituting an experimental measurement, methods have been developed to examine the conductive properties of proteins and biomembranes in silico. These are mainly molecular dynamics simulations in which a model system like a lipid bilayer is subjected to an externally applied voltage. Studies using these setups have been able to study dynamical phenomena like electroporation of membranes and ion translocation by channels. The benefit of such methods is the high level of detail of the active conduction mechanism, given by the inherently high resolution and data density that atomistic simulation affords. There are significant drawbacks, given by the uncertainty of the legitimacy of the model and the computational cost of modeling systems that are large enough and over sufficient timescales to be considered reproducing the macroscopic properties of the systems themselves. While atomistic simulations may access timescales close to, or into the microsecond domain, this is still several orders of magnitude lower than even the resolution of experimental methods such as patch-clamping. Clinical electrophysiology Clinical electrophysiology is the study of how electrophysiological principles and technologies can be applied to human health. For example, clinical cardiac electrophysiology is the study of the electrical properties which govern heart rhythm and activity. Cardiac electrophysiology can be used to observe and treat disorders such as arrhythmia (irregular heartbeat). For example, a doctor may insert a catheter containing an electrode into the heart to record the heart muscle's electrical activity. Another example of clinical electrophysiology is clinical neurophysiology. In this medical specialty, doctors measure the electrical properties of the brain, spinal cord, and nerves. Scientists such as Duchenne de Boulogne (1806–1875) and Nathaniel A. Buchwald (1924–2006) are considered to have greatly advanced the field of neurophysiology, enabling its clinical applications. Clinical reporting guidelines Minimum Information (MI) standards or reporting guidelines specify the minimum amount of meta data (information) and data required to meet a specific aim or aims in a clinical study. The "Minimum Information about a Neuroscience investigation" (MINI) family of reporting guideline documents aims to provide a consistent set of guidelines in order to report an electrophysiology experiment. In practice a MINI module comprises a checklist of information that should be provided (for example about the protocols employed) when a data set is described for publication. See also References External links Book chapter on Planar Patch Clamp Ion channels Neuroimaging Neurophysiology Biophysics
Histology
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms. Biological tissues Animal tissue classification There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma). Plant tissue classification For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types: Dermal tissue Vascular tissue Ground tissue Meristematic tissue Medical histology Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations. Occupations The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists. Sample preparation Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation. Fixation Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline). For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate. The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue can damage the biological functionality of proteins, particularly enzymes. Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues. 
However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols. Selection and trimming Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time. Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes. Embedding Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced either with a medium that solidifies directly or with an intermediary fluid (clearing) that is miscible with the embedding medium. Paraffin wax For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so water must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although other environmentally safe substitutes are in use) which removes the alcohol and is miscible with the wax; finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories, the dehydration, clearing, and wax infiltration are carried out in tissue processors that automate this process. Once infiltrated with paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue. Other materials Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes. In electron microscopy, epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required. For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks. Sectioning For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically 5 to 15 micrometers thick) which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut tissue sections between 50 and 150 nanometers thick. A limited number of manufacturers are recognized for their production of microtomes, including vibrating microtomes (commonly referred to as vibratomes), primarily for research and clinical studies. Additionally, Leica Biosystems is known for its production of light-microscopy-related products for research and clinical studies. 
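To put the section thicknesses just quoted in perspective, the short sketch below estimates how many serial sections a small block of embedded tissue could in principle yield at light-microscopy versus TEM thicknesses; the block depth is an arbitrary, illustrative assumption.

```python
# Rough illustration of the scale difference between light-microscopy and
# TEM sectioning. The 2 mm block depth is an arbitrary, illustrative figure.
block_depth_um = 2000.0        # 2 mm of embedded tissue

paraffin_section_um = 7.0      # within the 5-15 micrometre range above
ultrathin_section_um = 0.07    # 70 nm, within the 50-150 nm range above

print(int(block_depth_um / paraffin_section_um), "paraffin sections")    # ~285
print(int(block_depth_um / ultrathin_section_um), "ultrathin sections")  # ~28571
```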
Staining Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed both to give contrast to the tissue and to highlight particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used. Light microscopy Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in different shades of pink. In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains. Historadiography In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used in visualizing the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication) which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear track emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy. Immunohistochemistry Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or, when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail. Electron microscopy For electron microscopy, heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope. Specialized techniques Cryosectioning Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining. 
Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery. Ultramicrotomy Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome. Artifacts Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissues appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissues types and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions. History In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body. In the 19th century histology was an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The usage of illustrations in histology, deemed as useless by Bichat, was promoted by Jean Cruveilhier. In the early 1830s Purkynĕ invented a microtome with high precision. During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing). Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin. The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramon y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible. 
Future directions In vivo histology Currently there is intense interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples. See also National Society for Histotechnology Slice preparation Notes References External links Histotechnology Staining Histochemistry Anatomy Laboratory healthcare occupations
Drug class
A drug class is a group of medications and other compounds that have similar chemical structures, the same mechanism of action (i.e. binding to the same biological target), similar modes of action, and/or are used to treat similar diseases. The Food and Drug Administration (FDA) has worked on classifying and licensing new medications for many years; its Center for Drug Evaluation and Research categorizes these new medications based on both their chemical and therapeutic class. In several dominant drug classification systems, these four types of classifications form a hierarchy. For example, the fibrates are a chemical class of drugs (amphipathic carboxylic acids) that share the same mechanism of action (PPAR agonist) and mode of action (reducing blood triglycerides), and that are used to prevent and treat the same disease (atherosclerosis). Conversely, not all PPAR agonists are fibrates, not all triglyceride lowering agents are PPAR agonists, and not all drugs used to treat atherosclerosis are triglyceride-lowering agents. A drug class is typically defined by a prototype drug: the most important, and usually the first, drug developed within the class, used as a reference for comparison. Comprehensive systems include:
Anatomical Therapeutic Chemical Classification System (ATC) – combines classification by organ system and therapeutic, pharmacological, and chemical properties into five levels.
Systematized Nomenclature of Medicine (SNOMED) – includes a section devoted to drug classification.
Chemical class This type of categorisation of drugs is from a chemical perspective and categorises them by their chemical structure. Examples of drug classes that are based on chemical structures include:
Analgesic
Benzodiazepine
Cannabinoid
Cardiac glycoside
Fibrate
Gabapentinoid
Steroid
Thiazide diuretic
Triptan
β-lactam antibiotic
Mechanism of action This type of categorisation is from a pharmacological perspective and categorises drugs by their biological target. Drug classes that share a common molecular mechanism of action modulate the activity of a specific biological target. The definition of a mechanism of action also includes the type of activity at that biological target. For receptors, these activities include agonist, antagonist, inverse agonist, or modulator. Enzyme target mechanisms include activator or inhibitor. Ion channel modulators include opener or blocker. The following are specific examples of drug classes whose definition is based on a specific mechanism of action:
5-alpha-reductase inhibitor
ACE inhibitor
Alpha-adrenergic agonist
Angiotensin II receptor antagonist
Beta blocker
Cholinergic
Dopaminergic
GABAergic
Incretin mimetic
Nonsteroidal anti-inflammatory drug − cyclooxygenase inhibitor
Proton-pump inhibitor
Renin inhibitor
Selective glucocorticoid receptor modulator
Serotonergic
Statin – HMG-CoA reductase inhibitor
Mode of action This type of categorisation of drugs is from a biological perspective and categorises them by the anatomical or functional change they induce. Drug classes that are defined by common modes of action (i.e. the functional or anatomical change they induce) include:
Antifungals
Antimicrobials
Antithrombotics
Bronchodilator
Chronotrope (positive or negative)
Decongestant
Diuretic or Antidiuretic
Inotrope (positive or negative)
Therapeutic class This type of categorisation of drugs is from a medical perspective and categorises them by the pathology they are used to treat.
Drug classes that are defined by their therapeutic use (the pathology they are intended to treat) include:
Analgesics
Antibiotic
Anticancer
Anticoagulant
Antidepressant
Antidiabetic
Antiepileptic
Antipsychotic
Antispasmodic
Antiviral
Cardiovascular
Depressant
Sedative
Stimulant
Amalgamated classes Some drug classes have been amalgamated from these three principles to meet practical needs. The class of nonsteroidal anti-inflammatory drugs (NSAIDs) is one such example. Strictly speaking, and also historically, the wider class of anti-inflammatory drugs also comprises steroidal anti-inflammatory drugs. These drugs were in fact the predominant anti-inflammatories during the decade leading up to the introduction of the term "nonsteroidal anti-inflammatory drugs." Because of the disastrous reputation that the corticosteroids had acquired in the 1950s, the new term, which signalled that an anti-inflammatory drug was not a steroid, rapidly gained currency. The drug class of "nonsteroidal anti-inflammatory drugs" (NSAIDs) is thus composed of one element ("anti-inflammatory") that designates the mechanism of action, and one element ("nonsteroidal") that separates it from other drugs with that same mechanism of action. Similarly, one might argue that the class of disease-modifying anti-rheumatic drugs (DMARD) is composed of one element ("disease-modifying") that, albeit vaguely, designates a mechanism of action, and one element ("anti-rheumatic drug") that indicates its therapeutic use.
Disease-modifying antirheumatic drug (DMARD)
Nonsteroidal anti-inflammatory drug (NSAID)
Other systems of classification Other systems of drug classification exist, for example the Biopharmaceutics Classification System, which classifies drugs by their solubility and intestinal permeability.
Legal classification
For the Canadian legal classification, see Controlled Drugs and Substances Act.
For the UK legal classification, see Drugs controlled by the UK Misuse of Drugs Act.
For the US legal classification, see Controlled Substances Act.
Pregnancy category is defined using a variety of systems by different jurisdictions.
References External links Pharmacodynamics Medicinal chemistry Pharmacological classification systems
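To make the four classification axes concrete, here is a minimal, illustrative sketch of how a single drug could be annotated along them. It is not drawn from any official classification database; fenofibrate is used purely as an example member of the fibrate class discussed above, and the field values simply paraphrase the fibrate example in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DrugClassification:
    """One drug annotated along the four classification axes described above."""
    name: str
    chemical_class: str       # shared chemical structure
    mechanism_of_action: str  # biological target and type of activity
    mode_of_action: str       # functional or anatomical change induced
    therapeutic_class: str    # pathology the drug is used to treat

# Illustrative entry only; values paraphrase the fibrate example in the article.
fenofibrate = DrugClassification(
    name="fenofibrate",
    chemical_class="fibrate (amphipathic carboxylic acid)",
    mechanism_of_action="PPAR agonist",
    mode_of_action="reduces blood triglycerides",
    therapeutic_class="prevention and treatment of atherosclerosis",
)

print(fenofibrate)
```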
ChemDraw
ChemDraw is a molecule editor first developed in 1985 by Selena "Sally" Evans, her husband David A. Evans, and Stewart Rubenstein (later by the cheminformatics company CambridgeSoft). The company was sold to PerkinElmer in 2011. ChemDraw, along with Chem3D and ChemFinder, is part of the ChemOffice suite of programs and is available for Macintosh and Microsoft Windows. Features of ChemDraw 12.0:
Chemical structure to name conversion
Chemical name to structure conversion
NMR spectrum simulation (1H and 13C)
Mass spectrum simulation
Structure cleanup
Draw ligand structure
An extensive collection of templates, including style templates for most major chemical journals
Export to SVG
Export to PDF (Mac version only)
File format The native file formats for ChemDraw are the binary CDX and the preferred XML-based CDXML formats. ChemDraw can also import from, and export to, MOL, SDF, and SKC chemical file formats. Plugins SDK for ChemDraw enables third-party developers to write plugins. For example, Quick HotKey helps to set up HotKeys in interactive mode instead of manually editing the text file. The plugin website http://www.cambridgesoft.com/services/documentation/sdk/ appears to have been abandoned, and redirects to Revvity Signals' website. References See also Molecule editor Chemistry software Science software for macOS Science software for Windows
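Because ChemDraw exports to interchange formats such as MOL and SDF, structures drawn in it can be read back by other cheminformatics tools. The following is a hedged sketch using the open-source RDKit toolkit, which is not part of ChemDraw or ChemOffice; the file names are hypothetical placeholders for files exported from ChemDraw.

```python
from rdkit import Chem  # open-source cheminformatics toolkit, assumed to be installed

# Read a single structure exported from ChemDraw as a MOL file (hypothetical path).
mol = Chem.MolFromMolFile("exported_from_chemdraw.mol")
if mol is not None:
    print("SMILES:", Chem.MolToSmiles(mol))
    print("Atom count:", mol.GetNumAtoms())

# An SDF export may contain many records; iterate over them, skipping unparsable ones.
for record in Chem.SDMolSupplier("exported_from_chemdraw.sdf"):
    if record is not None:
        print(Chem.MolToSmiles(record))
```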
Dissociation (chemistry)
Dissociation in chemistry is a general process in which molecules (or ionic compounds such as salts, or complexes) separate or split into other things such as atoms, ions, or radicals, usually in a reversible manner. For instance, when an acid dissolves in water, a covalent bond between an electronegative atom and a hydrogen atom is broken by heterolytic fission, which gives a proton (H+) and a negative ion. Dissociation is the opposite of association or recombination. Dissociation constant For reversible dissociations in a chemical equilibrium AB <=> A + B the dissociation constant Kd is the ratio of dissociated to undissociated compound, Kd = [A][B]/[AB], where the brackets denote the equilibrium concentrations of the species. Dissociation degree The dissociation degree is the fraction of original solute molecules that have dissociated. It is usually indicated by the Greek symbol α. More accurately, degree of dissociation refers to the amount of solute dissociated into ions or radicals per mole. In the case of very strong acids and bases, the degree of dissociation will be close to 1. Less powerful acids and bases will have a lesser degree of dissociation. There is a simple relationship between this parameter and the van 't Hoff factor i. If the solute substance dissociates into n ions, then i = 1 + α(n − 1). For instance, for the following dissociation KCl <=> K+ + Cl- as n = 2, we would have that i = 1 + α. Salts The dissociation of salts by solvation in a solution, such as water, means the separation of the anions and cations. The salt can be recovered by evaporation of the solvent. An electrolyte refers to a substance that contains free ions and can be used as an electrically conductive medium. Most of the solute does not dissociate in a weak electrolyte, whereas in a strong electrolyte a higher ratio of solute dissociates to form free ions. A weak electrolyte is a substance whose solute exists in solution mostly in the form of molecules (which are said to be "undissociated"), with only a small fraction in the form of ions. Simply because a substance does not readily dissolve does not make it a weak electrolyte. Acetic acid and ammonia are good examples. Acetic acid is extremely soluble in water, but most of the compound dissolves into molecules, rendering it a weak electrolyte. Weak bases and weak acids are generally weak electrolytes. In an aqueous solution there will be some undissociated CH3COOH as well as some CH3COO- and H+ ions. A strong electrolyte is a solute that exists in solution completely or nearly completely as ions. Again, the strength of an electrolyte is defined as the percentage of solute that is ions, rather than molecules. The higher the percentage, the stronger the electrolyte. Thus, even if a substance is not very soluble, but does dissociate completely into ions, the substance is defined as a strong electrolyte. Similar logic applies to a weak electrolyte. Strong acids and bases are good examples, such as HCl and NaOH. These will all exist as ions in an aqueous medium. Gases The degree of dissociation in gases is denoted by the symbol α, where α refers to the fraction of gas molecules which dissociate. Various relationships between Kp and α exist depending on the stoichiometry of the equation. The example of dinitrogen tetroxide dissociating to nitrogen dioxide will be taken: N2O4 <=> 2NO2. If the initial concentration of dinitrogen tetroxide is 1 mole per litre, this will decrease by α at equilibrium giving, by stoichiometry, 2α moles of NO2. The equilibrium constant (in terms of pressure) is given by the equation Kp = (pNO2)^2 / pN2O4, where p represents the partial pressure of each species.
Hence, through the definition of partial pressure and using pT to represent the total pressure and x to represent the mole fraction: Kp = (xNO2 pT)^2 / (xN2O4 pT). The total number of moles at equilibrium is (1 − α) + 2α, which is equivalent to 1 + α. Thus, substituting the mole fractions with actual values in terms of α and simplifying: Kp = 4α^2 pT / (1 − α^2). This equation is in accordance with Le Chatelier's principle. Kp will remain constant with temperature. The addition of pressure to the system will increase the value of pT, so α must decrease to keep Kp constant. In fact, increasing the pressure of the equilibrium favours a shift to the left favouring the formation of dinitrogen tetroxide (as on this side of the equilibrium there is less pressure since pressure is proportional to number of moles), hence decreasing the extent of dissociation α. Acids in aqueous solution The reaction of an acid in water solvent is often described as a dissociation HA <=> H+ + A- where HA is a proton acid such as acetic acid, CH3COOH. The double arrow means that this is an equilibrium process, with dissociation and recombination occurring at the same time. This implies that the acid dissociation constant is Ka = [H+][A-]/[HA]. However, a more explicit description is provided by the Brønsted–Lowry acid–base theory, which specifies that the proton H+ does not exist as such in solution but is instead accepted by (bonded to) a water molecule to form the hydronium ion H3O+. The reaction can therefore be written as HA + H2O <=> H3O+ + A- and better described as an ionization or formation of ions (for the case when HA has no net charge). The equilibrium constant is then Ka = [H3O+][A-]/[HA], where [H2O] is not included because in dilute solution the solvent is essentially a pure liquid with a thermodynamic activity of one. Ka is variously named a dissociation constant, an acid ionization constant, an acidity constant or an ionization constant. It serves as an indicator of the acid strength: stronger acids have a higher Ka value (and a lower pKa value). Fragmentation Fragmentation of a molecule can take place by a process of heterolysis or homolysis. Receptors Receptors are proteins that bind small ligands. The dissociation constant Kd is used as an indicator of the affinity of the ligand to the receptor. The higher the affinity of the ligand for the receptor, the lower the Kd value (and the higher the pKd value). See also Bond-dissociation energy Photodissociation, dissociation of molecules by photons (light, gamma rays, x-rays) Radiolysis, dissociation of molecules by ionizing radiation Thermal decomposition References Chemical processes Equilibrium chemistry
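To tie the gas-phase relations above to numbers, the following minimal sketch solves Kp = 4α²pT/(1 − α²) for the degree of dissociation α of N2O4 and then applies i = 1 + α(n − 1) from the dissociation degree section. The Kp value shown is only an assumed, approximate room-temperature figure used for illustration.

```python
import math

def degree_of_dissociation(Kp: float, p_total: float) -> float:
    """alpha for N2O4 <=> 2NO2, solved from Kp = 4*alpha^2*p / (1 - alpha^2)."""
    return math.sqrt(Kp / (Kp + 4.0 * p_total))

def vant_hoff_factor(alpha: float, n_ions: int) -> float:
    """i = 1 + alpha*(n - 1) for a solute splitting into n particles."""
    return 1.0 + alpha * (n_ions - 1)

Kp = 0.14       # atm; assumed approximate value for N2O4 dissociation near 25 °C
p_total = 1.0   # atm
alpha = degree_of_dissociation(Kp, p_total)
print(f"alpha = {alpha:.3f}")  # fraction of N2O4 dissociated at this pressure

# For KCl (n = 2) assuming complete dissociation (alpha = 1), i = 2.
print(f"i(KCl, alpha=1) = {vant_hoff_factor(1.0, 2):.1f}")
```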
Isomorphism (sociology)
In sociology, an isomorphism is a similarity of the processes or structure of one organization to those of another, be it the result of imitation or independent development under similar constraints. The concept of institutional isomorphism was primarily developed by Paul DiMaggio and Walter Powell. The concept appears in their 1983 paper The iron cage revisited: institutional isomorphism and collective rationality in organizational fields. The term is borrowed from the mathematical concept of isomorphism. In the context of globalization, isomorphism refers to the idea that contemporary national societies are shaped by the institutionalization of world models constructed and propagated through global cultural and associational processes. Despite the heterogeneity of economic and political resources emphasized by realist theories, and of local cultural origins emphasized by micro-phenomenological theories, many ideas suggest that the trajectory of change in political units is towards homogenization around the world. Policy convergence is another example of isomorphism across nation states, for example in the European Union where states harmonise policies driven by structural pressures such as directives, regulations, cohesion funds and collaboration mechanisms. This is in contrast to theories of policy transfer or diffusion, which generally give more agency to states in adopting policies. Types of institutional isomorphism There are three main types of institutional isomorphism: normative, coercive and mimetic. The development of these three types of isomorphism can also create isomorphic paradoxes that hinder such development. Specifically, these isomorphic paradoxes are related to an organization's remit, resources, accountability, and professionalization. Normative isomorphic change Normative isomorphic change is driven by pressures brought about by professions. One mode is the legitimization inherent in the licensing and crediting of educational achievement. The other is the inter-organizational networks that span organizations. Norms developed during education are carried into organizations. Inter-hiring between existing industrial firms also encourages isomorphism. People from the same educational backgrounds will approach problems in much the same way. Socialization on the job reinforces these conformities. Normative isomorphism is in contrast to mimetic isomorphism, where uncertainty encourages imitation, and similar to coercive isomorphism, where organizations are forced to change by external forces. Coercive isomorphic change Coercive isomorphic change involves pressures on an organization from other organizations upon which it is dependent and from cultural expectations in society. Some are governmental mandates, some are derived from contract law or financial reporting requirements. "Organizations are increasingly homogeneous within given domains and increasingly organized around rituals of conformity to wider institutions". Political organizations normalize this concept definitively. Coercive isomorphism is in contrast to mimetic isomorphism, where uncertainty encourages imitation, and similar to normative isomorphism, where professional standards or networks influence change. Large corporations can have a similar impact on their subsidiaries. Mimetic isomorphism Mimetic isomorphism in organization theory refers to the tendency of an organization to imitate another organization's structure because of the belief that the structure of the latter organization is beneficial.
This behavior happens primarily when an organization's goals or means of achieving those goals are unclear. In this case, mimicking another organization perceived as legitimate becomes a "safe" way to proceed. An example is a struggling regional university hiring a star faculty member in order to be perceived as more similar to organizations that are revered (e.g., an Ivy League institution). Mimetic isomorphism is in contrast to coercive isomorphism, where organizations are forced to change by external forces, or normative isomorphism, where professional standards or networks influence change. The term has been applied by companies such as McKinsey & Co as part of their recommendations to companies undergoing restructuring or other organizational transformations. Researchers have documented such similarities, so-called isomorphic changes, arguing that, despite all the possible configurations of local economic forces, power relationships, and forms of traditional culture it might contain, a previously isolated island society that made contact with the rest of the globe would quickly take on standardized forms and come to appear similar to a hundred other nation-states around the world. Isomorphic developments supporting the same conclusion are reported for many nation-state features, that is, constitutional forms highlighting both state power and individual rights, mass schooling systems organized around a fairly standard curriculum, rationalized economic and demographic record keeping and data systems, antinatalist population control policies intended to enhance national development, formally equalized female status and rights, expanded human rights in general, expansive environmental policies, development-oriented economic policy, universalistic welfare systems, standard definitions of disease and health care, and even some basic demographic variables. These isomorphisms are difficult to account for with theories that reason from the differences among national economies and cultural traditions; however, they are sensible outcomes if nation-states are enactments of the world cultural order.
Water model
In computational chemistry, a water model is used to simulate and thermodynamically calculate water clusters, liquid water, and aqueous solutions with explicit solvent. The models are determined from quantum mechanics, molecular mechanics, experimental results, and combinations of these. To imitate specific properties of the molecules, many types of models have been developed. In general, these can be classified by the following three points: (i) the number of interaction points, called sites; (ii) whether the model is rigid or flexible; (iii) whether the model includes polarization effects. An alternative to the explicit water models is to use an implicit solvation model, also termed a continuum model, an example of which would be the COSMO solvation model or the polarizable continuum model (PCM) or a hybrid solvation model. Simple water models The rigid models are considered the simplest water models and rely on non-bonded interactions. In these models, bonding interactions are implicitly treated by holonomic constraints. The electrostatic interaction is modeled using Coulomb's law, and the dispersion and repulsion forces using the Lennard-Jones potential. The potential for models such as TIP3P (transferable intermolecular potential with 3 points) and TIP4P is represented by Eab = Σi Σj kC qi qj / rij + A/rOO^12 − B/rOO^6 (the sums running over the charged sites of the two interacting molecules), where kC, the electrostatic constant, has a value of 332.1 Å·kcal/(mol·e²) in the units commonly used in molecular modeling; qi and qj are the partial charges relative to the charge of the electron; rij is the distance between two atoms or charged sites; rOO is the distance between the oxygen atoms; and A and B are the Lennard-Jones parameters. The charged sites may be on the atoms or on dummy sites (such as lone pairs). In most water models, the Lennard-Jones term applies only to the interaction between the oxygen atoms. The figure below shows the general shape of the 3- to 6-site water models. The exact geometric parameters (the OH distance and the HOH angle) vary depending on the model. 2-site A 2-site model of water based on the familiar three-site SPC model (see below) has been shown to predict the dielectric properties of water using site-renormalized molecular fluid theory. 3-site Three-site models have three interaction points corresponding to the three atoms of the water molecule. Each site has a point charge, and the site corresponding to the oxygen atom also has the Lennard-Jones parameters. Since 3-site models achieve a high computational efficiency, these are widely used for many applications of molecular dynamics simulations. Most of the models use a rigid geometry matching that of actual water molecules. An exception is the SPC model, which assumes an ideal tetrahedral shape (HOH angle of 109.47°) instead of the observed angle of 104.5°. The table below lists the parameters for some 3-site models. The SPC/E model adds an average polarization correction to the potential energy function: Epol = (1/2) Σi (μ − μ0)^2 / αi, where μ is the electric dipole moment of the effectively polarized water molecule (2.35 D for the SPC/E model), μ0 is the dipole moment of an isolated water molecule (1.85 D from experiment), and αi is an isotropic polarizability constant, with a value of 1.608 × 10^−40 F·m². Since the charges in the model are constant, this correction just results in adding 1.25 kcal/mol (5.22 kJ/mol) to the total energy. The SPC/E model results in a better density and diffusion constant than the SPC model. The TIP3P model implemented in the CHARMM force field is a slightly modified version of the original.
The difference lies in the Lennard-Jones parameters: unlike TIP3P, the CHARMM version of the model places Lennard-Jones parameters on the hydrogen atoms too, in addition to the one on oxygen. The charges are not modified. Three-site model (TIP3P) has better performance in calculating specific heats. Flexible SPC water model The flexible simple point-charge water model (or flexible SPC water model) is a re-parametrization of the three-site SPC water model. The SPC model is rigid, whilst the flexible SPC model is flexible. In the model of Toukan and Rahman, the O–H stretching is made anharmonic, and thus the dynamical behavior is well described. This is one of the most accurate three-center water models without taking into account the polarization. In molecular dynamics simulations it gives the correct density and dielectric permittivity of water. Flexible SPC is implemented in the programs MDynaMix and Abalone. Other models Ferguson (flexible SPC) CVFF (flexible) MG (flexible and dissociative) KKY potential (flexible model). BLXL (smear charged potential). 4-site The four-site models have four interaction points by adding one dummy atom near of the oxygen along the bisector of the HOH angle of the three-site models (labeled M in the figure). The dummy atom only has a negative charge. This model improves the electrostatic distribution around the water molecule. The first model to use this approach was the Bernal–Fowler model published in 1933, which may also be the earliest water model. However, the BF model doesn't reproduce well the bulk properties of water, such as density and heat of vaporization, and is thus of historical interest only. This is a consequence of the parameterization method; newer models, developed after modern computers became available, were parameterized by running Metropolis Monte Carlo or molecular dynamics simulations and adjusting the parameters until the bulk properties are reproduced well enough. The TIP4P model, first published in 1983, is widely implemented in computational chemistry software packages and often used for the simulation of biomolecular systems. There have been subsequent reparameterizations of the TIP4P model for specific uses: the TIP4P-Ew model, for use with Ewald summation methods; the TIP4P/Ice, for simulation of solid water ice; TIP4P/2005, a general parameterization for simulating the entire phase diagram of condensed water; and TIP4PQ/2005, a similar model but designed to accurately describe the properties of solid and liquid water when quantum effects are included in the simulation. Most of the four-site water models use an OH distance and HOH angle which match those of the free water molecule. One exception is the OPC model, in which no geometry constraints are imposed other than the fundamental C2v molecular symmetry of the water molecule. Instead, the point charges and their positions are optimized to best describe the electrostatics of the water molecule. OPC reproduces a comprehensive set of bulk properties more accurately than several of the commonly used rigid n-site water models. The OPC model is implemented within the AMBER force field. Others: q-TIP4P/F (flexible) TIP4P/2005f (flexible) 5-site The 5-site models place the negative charge on dummy atoms (labelled L) representing the lone pairs of the oxygen atom, with a tetrahedral-like geometry. An early model of these types was the BNS model of Ben-Naim and Stillinger, proposed in 1971, soon succeeded by the ST2 model of Stillinger and Rahman in 1974. 
Mainly due to their higher computational cost, five-site models were not developed much until 2000, when the TIP5P model of Mahoney and Jorgensen was published. When compared with earlier models, the TIP5P model results in improvements in the geometry for the water dimer, a more "tetrahedral" water structure that better reproduces the experimental radial distribution functions from neutron diffraction, and the temperature of maximal density of water. The TIP5P-E model is a reparameterization of TIP5P for use with Ewald sums. Note, however, that the BNS and ST2 models do not use Coulomb's law directly for the electrostatic terms, but a modified version that is scaled down at short distances by multiplying it by the switching function S(r): Thus, the RL and RU parameters only apply to BNS and ST2. 6-site Originally designed to study water/ice systems, a 6-site model that combines all the sites of the 4- and 5-site models was developed by Nada and van der Eerden. Since it had a very high melting temperature when employed under periodic electrostatic conditions (Ewald summation), a modified version was published later optimized by using the Ewald method for estimating the Coulomb interaction. Other The effect of explicit solute model on solute behavior in biomolecular simulations has been also extensively studied. It was shown that explicit water models affected the specific solvation and dynamics of unfolded peptides, while the conformational behavior and flexibility of folded peptides remained intact. MB model. A more abstract model resembling the Mercedes-Benz logo that reproduces some features of water in two-dimensional systems. It is not used as such for simulations of "real" (i.e., three-dimensional) systems, but it is useful for qualitative studies and for educational purposes. Coarse-grained models. One- and two-site models of water have also been developed. In coarse-grain models, each site can represent several water molecules. Many-body models. Water models built using training-set configurations solved quantum mechanically, which then use machine learning protocols to extract potential-energy surfaces. These potential-energy surfaces are fed into MD simulations for an unprecedented degree of accuracy in computing physical properties of condensed phase systems. Another classification of many body models is on the basis of the expansion of the underlying electrostatics, e.g., the SCME (Single Center Multipole Expansion) model Computational cost The computational cost of a water simulation increases with the number of interaction sites in the water model. The CPU time is approximately proportional to the number of interatomic distances that need to be computed. For the 3-site model, 9 distances are required for each pair of water molecules (every atom of one molecule against every atom of the other molecule, or 3 × 3). For the 4-site model, 10 distances are required (every charged site with every charged site, plus the O–O interaction, or 3 × 3 + 1). For the 5-site model, 17 distances are required (4 × 4 + 1). Finally, for the 6-site model, 26 distances are required (5 × 5 + 1). When using rigid water models in molecular dynamics, there is an additional cost associated with keeping the structure constrained, using constraint algorithms (although with bond lengths constrained it is often possible to increase the time step). 
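As a concrete illustration of the cost argument above, the sketch below evaluates the rigid 3-site pair energy in the Coulomb-plus-Lennard-Jones form given earlier, which requires exactly the nine site-site distances counted in the text for a 3-site model. The charges and A, B parameters are the values commonly quoted for TIP3P and are included only as an assumption; they should be checked against the original parameterization before any real use.

```python
import math

K_C = 332.1                    # electrostatic constant, Å·kcal/(mol·e²), as given above
Q_O, Q_H = -0.834, +0.417      # commonly quoted TIP3P partial charges (e); assumed values
A, B = 582.0e3, 595.0          # O-O Lennard-Jones parameters (kcal·Å^12/mol, kcal·Å^6/mol); assumed
CHARGES = (Q_O, Q_H, Q_H)      # site order: O, H, H

def pair_energy(water_a, water_b):
    """Interaction energy (kcal/mol) of two rigid 3-site water molecules.

    water_a, water_b: sequences of three (x, y, z) positions in Å, ordered O, H, H.
    Evaluates all 3 x 3 = 9 site-site distances, the count quoted in the text."""
    energy = 0.0
    for i, (qi, ri) in enumerate(zip(CHARGES, water_a)):
        for j, (qj, rj) in enumerate(zip(CHARGES, water_b)):
            r = math.dist(ri, rj)
            energy += K_C * qi * qj / r        # Coulomb term for every site pair
            if i == 0 and j == 0:              # Lennard-Jones only between the oxygens
                energy += A / r**12 - B / r**6
    return energy

# Two arbitrary, roughly water-like geometries (Å) used purely for demonstration.
w1 = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
w2 = [(3.0, 0.0, 0.0), (3.96, 0.0, 0.0), (2.76, 0.93, 0.0)]
print(f"pair energy ≈ {pair_energy(w1, w2):.2f} kcal/mol")

# Distance counts per pair of molecules, as stated in the computational cost section.
print({"3-site": 3 * 3, "4-site": 3 * 3 + 1, "5-site": 4 * 4 + 1, "6-site": 5 * 5 + 1})
```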
See also Water (properties) Water (data page) Water dimer Force field (chemistry) Comparison of force field implementations Molecular mechanics Molecular modelling Comparison of software for molecular mechanics modeling Solvent models References Water Computational chemistry
Contamination
Contamination is the presence of a constituent, impurity, or some other undesirable element that renders something unsuitable, unfit or harmful for physical body, natural environment, workplace, etc. Types of contamination Within the sciences, the word "contamination" can take on a variety of subtle differences in meaning, whether the contaminant is a solid or a liquid, as well as the variance of environment the contaminant is found to be in. A contaminant may even be more abstract, as in the case of an unwanted energy source that may interfere with a process. The following represent examples of different types of contamination based on these and other variances. Chemical contamination In chemistry, the term "contamination" usually describes a single constituent, but in specialized fields the term can also mean chemical mixtures, even up to the level of cellular materials. All chemicals contain some level of impurity. Contamination may be recognized or not and may become an issue if the impure chemical causes additional chemical reactions when mixed with other chemicals or mixtures. Chemical reactions resulting from the presence of an impurity may at times be beneficial, in which case the label "contaminant" may be replaced with "reactant" or "catalyst." (This may be true even in physical chemistry, where, for example, the introduction of an impurity in an intrinsic semiconductor positively increases conductivity.) If the additional reactions are detrimental, other terms are often applied such as "toxin", "poison", or pollutant, depending on the type of molecule involved. Chemical decontamination of substance can be achieved through decomposition, neutralization, and physical processes, though a clear understanding of the underlying chemistry is required. Contamination of pharmaceutics and therapeutics is notoriously dangerous and creates both perceptual and technical challenges. Environmental contamination In environmental chemistry, the term "contamination" is in some cases virtually equivalent to pollution, where the main interest is the harm done on a large scale to humans, organisms, or environments. An environmental contaminant may be chemical in nature, though it may also be a biological (pathogenic bacteria, virus, invasive species) or physical (energy) agent. Environmental monitoring is one mechanism available to scientists to detect contamination activities early before they become too detrimental. Agricultural contamination Another type of environmental contaminant can be found in the form of genetically modified organisms (GMOs), specifically when they come in contact with organic agriculture. This sort of contamination can result in the decertification of a farm. This sort of contamination can at times be difficult to control, necessitating mechanisms for compensating farmers where there has been contamination by GMOs. A Parliamentary Inquiry in Western Australia considered a range of options for compensating farmers whose farms had been contaminated by GMOs but ultimately settled on recommending no action. Food, beverage, and pharmaceutical contamination In food chemistry and medicinal chemistry, the term "contamination" is used to describe harmful intrusions, such as the presence of toxins or pathogens in food or pharmaceutical drugs. Radioactive contamination In environments where nuclear safety and radiation protection are required, radioactive contamination is a concern. 
Radioactive substances can appear on surfaces, or within solids, liquids, or gases (including the human body), where their presence is unintended or undesirable, and processes can give rise to their presence in such places. Several examples of radioactive contamination include:
residual radioactive material remaining at a site after the completion of decommissioning of a site where there was a nuclear reactor, such as a power plant, experimental reactor, isotope reactor, or a nuclear powered ship or submarine
ingested or absorbed radioactive material that contaminates a biological entity, whether unintentionally or intentionally (such as with radiopharmaceuticals)
escape of elements after a nuclear accident, such as the contamination by Iodine-131 and Caesium-137 after the nuclear disaster in Chernobyl, Ukraine.
Note that the term "radioactive contamination" may have a connotation that is not intended. The term refers only to the presence of radioactivity and gives no indication itself of the magnitude of the hazard involved. However, radioactivity can be measured as a quantity in a given location or on a surface, or on a unit area of a surface, such as a square meter or centimeter. Like environmental monitoring, radiation monitoring can be employed to catch contamination-causing activities before much harm is done. Interplanetary contamination Interplanetary contamination occurs when a planetary body is biologically contaminated by a space probe or spacecraft, either deliberately or unintentionally. This can work both on arrival to the foreign planetary body and upon return to Earth. Contaminated evidence In forensic science, evidence can become contaminated. Contamination of fingerprints, hair, skin, or DNA—from first responders or from sources not related to the ongoing investigation, such as family members or friends of the victim who are not suspects—can lead to wrongful convictions, mistrials, or dismissal of evidence. Contaminated samples In the biological sciences, accidental introduction of "foreign" material can seriously distort the results of experiments where small samples are used. In cases where the contaminant is a living microorganism, it can often multiply to dominate the sample and render it useless, as in contaminated cell culture lines. A similar effect can be seen in geology, geochemistry, and archaeology, where even a few grains of a material can distort the results of sophisticated experiments. Food contaminant detection method Conventional food contaminant test methods may be limited by complicated or tedious sample preparation procedures, long testing times, costly instrumentation, and the need for professional operators. However, some rapid, novel, sensitive, easy-to-use and affordable methods have been developed, including:
Cyanidin quantification by a naphthalimide-based azo dye colorimetric probe.
Lead quantification by a modified immunoassay test strip based on a heterogeneously sized gold amplified probe.
Microbial toxin detection by HPLC with UV-Vis or fluorescence detection and competitive immunoassays in ELISA configuration.
Bacterial virulence gene detection by reverse-transcription polymerase chain reaction (RT-PCR) and DNA colony hybridization.
Pesticide detection and quantification by strip-based immunoassay, a test strip based on functionalized AuNPs, and surface-enhanced Raman spectroscopy (SERS) test strips.
Enrofloxacin (a chicken antibiotic) quantification by a Ru(phen)3 2+-doped silica fluorescent nanoparticle (NP) based immunochromatographic test strip and a portable fluorescent strip reader.
Nitrite quantification by PRhB-based electrochemical sensors and ion-selective electrodes (ISEs). See also Exposome References External links Environmental science Geochemistry Quality control Adulteration
Non-proteinogenic amino acids
In biochemistry, non-coded or non-proteinogenic amino acids are distinct from the 22 proteinogenic amino acids (21 in eukaryotes), which are naturally encoded in the genome of organisms for the assembly of proteins. However, over 140 non-proteinogenic amino acids occur naturally in proteins and thousands more may occur in nature or be synthesized in the laboratory. Chemically synthesized amino acids can be called unnatural amino acids. Unnatural amino acids can be synthetically prepared from their native analogs via modifications such as amine alkylation, side chain substitution, structural bond extension cyclization, and isosteric replacements within the amino acid backbone. Many non-proteinogenic amino acids are important: intermediates in biosynthesis, in post-translational formation of proteins, in a physiological role (e.g. components of bacterial cell walls, neurotransmitters and toxins), natural or man-made pharmacological compounds, present in meteorites or used in prebiotic experiments (such as the Miller–Urey experiment), might be important neurotransmitters, such as γ-aminobutyric acid, and can play a crucial role in cellular bioenergetics, such as creatine. Definition by negation Technically, any organic compound with an amine (–NH2) and a carboxylic acid (–COOH) functional group is an amino acid. The proteinogenic amino acids are a small subset of this group that possess a central carbon atom (α- or 2-) bearing an amino group, a carboxyl group, a side chain and an α-hydrogen levo conformation, with the exception of glycine, which is achiral, and proline, whose amine group is a secondary amine and is consequently frequently referred to as an imino acid for traditional reasons, albeit not an imino. The genetic code encodes 20 standard amino acids for incorporation into proteins during translation. However, there are two extra proteinogenic amino acids: selenocysteine and pyrrolysine. These non-standard amino acids do not have a dedicated codon, but are added in place of a stop codon when a specific sequence is present, UGA codon and SECIS element for selenocysteine, UAG PYLIS downstream sequence for pyrrolysine. All other amino acids are termed "non-proteinogenic". There are various groups of amino acids: 20 standard amino acids 22 proteinogenic amino acids over 80 amino acids created abiotically in high concentrations about 900 are produced by natural pathways over 118 engineered amino acids have been placed into protein These groups overlap, but are not identical. All 22 proteinogenic amino acids are biosynthesised by organisms and some, but not all, of them also are abiotic (found in prebiotic experiments and meteorites). Some natural amino acids, such as norleucine, are misincorporated translationally into proteins due to infidelity of the protein-synthesis process. Many amino acids, such as ornithine, are metabolic intermediates produced biosynthetically, but not incorporated translationally into proteins. Post-translational modification of amino acid residues in proteins leads to the formation of many proteinaceous, but non-proteinogenic, amino acids. Other amino acids are solely found in abiotic mixes (e.g. α-methylnorvaline). Over 30 unnatural amino acids have been inserted translationally into protein in engineered systems, yet are not biosynthetic. 
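The overlapping groups listed above can be pictured with simple set operations. The sketch below uses only a handful of representative names taken from this article; it is an illustration of the relationships, not an inventory of the actual groups.

```python
# Tiny, illustrative subsets only; names are drawn from the surrounding text.
standard = {"glycine", "proline", "alanine"}                   # 3 of the 20 standard amino acids
proteinogenic = standard | {"selenocysteine", "pyrrolysine"}   # the 22 proteinogenic include two extras
natural_pool = proteinogenic | {"ornithine", "norleucine", "GABA"}

non_proteinogenic = natural_pool - proteinogenic
non_standard_proteinogenic = proteinogenic - standard

print("standard is a subset of proteinogenic:", standard <= proteinogenic)
print("proteinogenic but non-standard:", non_standard_proteinogenic)
print("non-proteinogenic examples:", non_proteinogenic)
```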
Nomenclature In addition to the IUPAC numbering system to differentiate the various carbons in an organic molecule, by sequentially assigning a number to each carbon, including those forming a carboxylic group, the carbons along the side-chain of amino acids can also be labelled with Greek letters, where the α-carbon is the central chiral carbon possessing a carboxyl group, a side chain and, in α-amino acids, an amino group – the carbon in carboxylic groups is not counted. (Consequently, the IUPAC names of many non-proteinogenic α-amino acids start with 2-amino- and end in -ic acid.) Natural non-L-α-amino acids Most natural amino acids are α-amino acids in the L configuration, but some exceptions exist. Non-alpha Some non-α-amino acids exist in organisms. In these structures, the amine group is displaced further from the carboxylic acid end of the amino acid molecule. Thus a β-amino acid has the amine group bonded to the second carbon away, and a γ-amino acid has it on the third. Examples include β-alanine, GABA, and δ-aminolevulinic acid. The reason why α-amino acids are used in proteins has been linked to their frequency in meteorites and prebiotic experiments. An initial speculation on the deleterious properties of β-amino acids in terms of secondary structure turned out to be incorrect. D-amino acids Some amino acids contain the opposite absolute chirality, chemicals that are not available from normal ribosomal translation and transcription machinery. Most bacterial cells walls are formed by peptidoglycan, a polymer composed of amino sugars crosslinked with short oligopeptides bridged between each other. The oligopeptide is non-ribosomally synthesised and contains several peculiarities including D-amino acids, generally D-alanine and D-glutamate. A further peculiarity is that the former is racemised by a PLP-binding enzymes (encoded by alr or the homologue dadX), whereas the latter is racemised by a cofactor independent enzyme (murI). Some variants are present, in Thermotoga spp. D-Lysine is present and in certain vancomycin-resistant bacteria D-serine is present (vanT gene). Without a hydrogen on the α-carbon All proteinogenic amino acids have at least one hydrogen on the α-carbon. Glycine has two hydrogens, and all others have one hydrogen and one side-chain. Replacement of the remaining hydrogen with a larger substituent, such as a methyl group, distorts the protein backbone. In some fungi α-aminoisobutyric acid is produced as a precursor to peptides, some of which exhibit antibiotic properties. This compound is similar to alanine, but possesses an additional methyl group on the α-carbon instead of a hydrogen. It is therefore achiral. Another compound similar to alanine without an α-hydrogen is dehydroalanine, which possess a methylene sidechain. It is one of several naturally occurring dehydroamino acids. Twin amino acid stereocentres A subset of L-α-amino acids are ambiguous as to which of two ends is the α-carbon. In proteins a cysteine residue can form a disulfide bond with another cysteine residue, thus crosslinking the protein. Two crosslinked cysteines form a cystine molecule. Cysteine and methionine are generally produced by direct sulfurylation, but in some species they can be produced by transsulfuration, where the activated homoserine or serine is fused to a cysteine or homocysteine forming cystathionine. A similar compound is lanthionine, which can be seen as two alanine molecules joined via a thioether bond and is found in various organisms. 
Similarly, djenkolic acid, a plant toxin from jengkol beans, is composed of two cysteines connected by a methylene group. Diaminopimelic acid is both used as a bridge in peptidoglycan and is used a precursor to lysine (via its decarboxylation). Prebiotic amino acids and alternative biochemistries In meteorites and in prebiotic experiments (e.g. Miller–Urey experiment) many more amino acids than the twenty standard amino acids are found, several of which are at higher concentrations than the standard ones. It has been conjectured that if amino acid based life were to arise elsewhere in the universe, no more than 75% of the amino acids would be in common. The most notable anomaly is the lack of aminobutyric acid. Straight side chain The genetic code has been described as a frozen accident and the reasons why there is only one standard amino acid with a straight chain, alanine, could simply be redundancy with valine, leucine and isoleucine. However, straight chained amino acids are reported to form much more stable alpha helices. Chalcogen Serine, homoserine, O-methylhomoserine and O-ethylhomoserine possess a hydroxymethyl, hydroxyethyl, O-methylhydroxymethyl and O-methylhydroxyethyl side chain; whereas cysteine, homocysteine, methionine and ethionine possess the thiol equivalents. The selenol equivalents are selenocysteine, selenohomocysteine, selenomethionine and selenoethionine. Amino acids with the next chalcogen down are also found in nature: several species such as Aspergillus fumigatus, Aspergillus terreus, and Penicillium chrysogenum in the absence of sulfur are able to produce and incorporate into protein tellurocysteine and telluromethionine. Expanded genetic code Roles In cells, especially autotrophs, several non-proteinogenic amino acids are found as metabolic intermediates. However, despite the catalytic flexibility of PLP-binding enzymes, many amino acids are synthesised as keto acids (such as 4-methyl-2-oxopentanoate to leucine) and aminated in the last step, thus keeping the number of non-proteinogenic amino acid intermediates fairly low. Ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). In addition to primary metabolism, several non-proteinogenic amino acids are precursors or the final production in secondary metabolism to make small compounds or non-ribosomal peptides (such as some toxins). Post-translationally incorporated into protein Despite not being encoded by the genetic code as proteinogenic amino acids, some non-standard amino acids are nevertheless found in proteins. These are formed by post-translational modification of the side chains of standard amino acids present in the target protein. These modifications are often essential for the function or regulation of a protein; for example, in γ-carboxyglutamate the carboxylation of glutamate allows for better binding of calcium cations, and in hydroxyproline the hydroxylation of proline is critical for maintaining connective tissues. Another example is the formation of hypusine in the translation initiation factor EIF5A, through modification of a lysine residue. Such modifications can also determine the localization of the protein, for example, the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. There is some preliminary evidence that aminomalonic acid may be present, possibly by misincorporation, in protein. 
Toxic analogues Several non-proteinogenic amino acids are toxic due to their ability to mimic certain properties of proteinogenic amino acids, such as thialysine. Some non-proteinogenic amino acids are neurotoxic by mimicking amino acids used as neurotransmitters (that is, not for protein biosynthesis), including quisqualic acid, canavanine and azetidine-2-carboxylic acid. Cephalosporin C has an α-aminoadipic acid (homoglutamate) backbone that is amidated with a cephalosporin moiety. Penicillamine is a therapeutic amino acid, whose mode of action is unknown. Naturally-occurring cyanotoxins can also include non-proteinogenic amino acids. Microcystin and nodularin, for example, are both derived from ADDA, a β-amino acid. Not amino acids Taurine is an amino sulfonic acid and not an amino carboxylic acid, however it is occasionally considered as such as the amounts required to suppress the auxotroph in certain organisms (such as cats) are closer to those of "essential amino acids" (amino acid auxotrophy) than of vitamins (cofactor auxotrophy). The osmolytes, sarcosine and glycine betaine are derived from amino acids, but have a secondary and quaternary amine respectively. See also Dicarboxylic acid Notes References Amino acids
Brønsted–Lowry acid–base theory
The Brønsted–Lowry theory (also called proton theory of acids and bases) is an acid–base reaction theory which was first developed by Johannes Nicolaus Brønsted and Thomas Martin Lowry independently in 1923. The basic concept of this theory is that when an acid and a base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid by exchange of a proton (the hydrogen cation, or H+). This theory generalises the Arrhenius theory. Definitions of acids and bases In the Arrhenius theory, acids are defined as substances that dissociate in aqueous solutions to give H+ (hydrogen ions or protons), while bases are defined as substances that dissociate in aqueous solutions to give OH− (hydroxide ions). In 1923, physical chemists Johannes Nicolaus Brønsted in Denmark and Thomas Martin Lowry in England both independently proposed the theory named after them. In the Brønsted–Lowry theory acids and bases are defined by the way they react with each other, which generalises the definitions. This is best illustrated by an equilibrium equation: acid + base ⇌ conjugate base + conjugate acid. With an acid, HA, the equation can be written symbolically as: HA + B <=> A- + HB+ The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions (is reversible). The acid, HA, is a proton donor which can lose a proton to become its conjugate base, A−. The base, B, is a proton acceptor which can become its conjugate acid, HB+. Most acid–base reactions are fast, so the substances in the reaction are usually in dynamic equilibrium with each other. Aqueous solutions Consider the following acid–base reaction: CH3COOH + H2O <=> CH3COO- + H3O+ Acetic acid, CH3COOH, is an acid because it donates a proton to water and becomes its conjugate base, the acetate ion, CH3COO−. H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the hydronium ion, H3O+. The reverse of an acid–base reaction is also an acid–base reaction, between the conjugate acid of the base in the first reaction and the conjugate base of the acid. In the above example, ethanoate is the base of the reverse reaction and the hydronium ion is the acid: H3O+ + CH3COO- <=> CH3COOH + H2O One feature of the Brønsted–Lowry theory in contrast to Arrhenius theory is that it does not require an acid to dissociate. Amphoteric substances The essence of Brønsted–Lowry theory is that an acid is only such in relation to a base, and vice versa. Water is amphoteric as it can act as an acid or as a base. In the self-ionization of water, one molecule of H2O acts as a base and gains H+ to become H3O+, while the other acts as an acid and loses H+ to become OH−. Another example is illustrated by substances like aluminium hydroxide, Al(OH)3, which can act as an acid, Al(OH)3 + OH- <=> Al(OH)4-, or as a base, 3H+ + Al(OH)3 <=> 3H2O + Al3+(aq). Non-aqueous solutions The hydrogen ion, or hydronium ion, is a Brønsted–Lowry acid in water, and the hydroxide ion is a base, because of the autoionization of water reaction H2O + H2O <=> H3O+ + OH- An analogous reaction occurs in liquid ammonia NH3 + NH3 <=> NH4+ + NH2- Thus, the ammonium ion, NH4+, in liquid ammonia corresponds to the hydronium ion in water and the amide ion, NH2−, in ammonia, to the hydroxide ion in water. Ammonium salts behave as acids, and metal amides behave as bases. Some non-aqueous solvents can behave as bases, i.e. accept protons, in relation to Brønsted–Lowry acids: HA + S <=> A- + SH+ where S stands for a solvent molecule.
The most important of such solvents are dimethyl sulfoxide, DMSO, and acetonitrile, CH3CN, as these solvents have been widely used to measure the acid dissociation constants of carbon-containing molecules. Because DMSO accepts protons more strongly than H2O, the acid becomes stronger in this solvent than in water. Indeed, many molecules behave as acids in non-aqueous solutions but not in aqueous solutions. An extreme case occurs with carbon acids, where a proton is extracted from a C–H bond. Some non-aqueous solvents can behave as acids. An acidic solvent will make dissolved substances more basic. For example, the compound CH3COOH is known as acetic acid since it behaves as an acid in water. However, it behaves as a base in liquid hydrogen fluoride, a much more acidic solvent: CH3COOH + 2HF <=> CH3C(OH)2+ + HF2- Comparison with Lewis acid–base theory In the same year that Brønsted and Lowry published their theory, G. N. Lewis created an alternative theory of acid–base reactions. The Lewis theory is based on electronic structure. A Lewis base is a compound that can give an electron pair to a Lewis acid, a compound that can accept an electron pair. Lewis's proposal explains the Brønsted–Lowry classification using electronic structure. HA + B <=> A- + BH+ In this representation both the base, B, and the conjugate base, A−, are shown carrying a lone pair of electrons and the proton, which is a Lewis acid, is transferred between them. Lewis later wrote "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." In Lewis theory an acid, A, and a base, B, form an adduct, AB, in which the electron pair forms a dative covalent bond between A and B. This is shown when the adduct H3N−BF3 forms from ammonia and boron trifluoride, a reaction that cannot occur in water because boron trifluoride hydrolyzes in water: 4BF3 + 3H2O -> B(OH)3 + 3HBF4 The reaction above illustrates that BF3 is an acid in both the Lewis and Brønsted–Lowry classifications and shows that the two theories agree with each other. Boric acid is recognised as a Lewis acid because of the reaction B(OH)3 + H2O <=> B(OH)4- + H+ In this case the acid does not split up but the base, H2O, does. A solution of B(OH)3 is acidic because hydrogen ions are given off in this reaction. There is strong evidence that dilute aqueous solutions of ammonia contain minute amounts of the ammonium ion H2O + NH3 -> OH- + NH4+ and that, when dissolved in water, ammonia functions as a Lewis base. Comparison with the Lux–Flood theory The reactions between oxides in the solid or liquid states are excluded in the Brønsted–Lowry theory. For example, the reaction 2MgO + SiO2 -> Mg2SiO4 is not covered in the Brønsted–Lowry definition of acids and bases. On the other hand, magnesium oxide acts as a base when it reacts with an aqueous solution of an acid: 2H+ + MgO(s) -> Mg2+(aq) + H2O Dissolved silicon dioxide, SiO2, has been predicted to be a weak acid in the Brønsted–Lowry sense: SiO2(s) + 2H2O <=> Si(OH)4 (solution) Si(OH)4 <=> Si(OH)3O- + H+ According to the Lux–Flood theory, oxides like MgO and SiO2 in the solid state may be called acids or bases. For example, the mineral olivine may be regarded as a compound of a basic oxide, MgO, with silicon dioxide, SiO2, as an acidic oxide. This is important in geochemistry. References Bibliography Acid–base chemistry Equilibrium chemistry
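As a numerical companion to the Ka expression discussed in the dissociation section of this theory, the sketch below solves the equilibrium HA + H2O ⇌ H3O+ + A− exactly for a monoprotic acid, assuming activity coefficients of one and neglecting the autoionization of water. The acetic acid Ka used is an approximate textbook value assumed only for illustration.

```python
import math

def hydronium_concentration(Ka: float, c_total: float) -> float:
    """[H3O+] for a weak monoprotic acid HA of analytical concentration c_total.

    Solves x^2 / (c_total - x) = Ka, i.e. x^2 + Ka*x - Ka*c_total = 0,
    ignoring water autoionization and activity effects."""
    return (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * c_total)) / 2.0

Ka = 1.8e-5   # assumed approximate Ka of acetic acid in water near 25 °C
c = 0.10      # mol/L analytical concentration
h3o = hydronium_concentration(Ka, c)
print(f"[H3O+] ≈ {h3o:.2e} M, pH ≈ {-math.log10(h3o):.2f}, pKa ≈ {-math.log10(Ka):.2f}")
```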
Psychometrics
Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally covers specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales. Practitioners are described as psychometricians, although not all who engage in psychometric research go by this title. Psychometricians usually possess specific qualifications, such as degrees or certifications, and most are psychologists with advanced graduate training in psychometrics and measurement theory. In addition to traditional academic institutions, practitioners also work for organizations such as the Educational Testing Service and Psychological Corporation. Some psychometric researchers focus on the construction and validation of assessment instruments, including surveys, scales, and open- or close-ended questionnaires. Others focus on research relating to measurement theory (e.g., item response theory, intraclass correlation) or specialize as learning and development professionals. Historical foundation Psychological testing has come from two streams of thought: the first, from Darwin, Galton, and Cattell, on the measurement of individual differences and the second, from Herbart, Weber, Fechner, and Wundt and their psychophysical measurements of a similar construct. The second set of individuals and their research is what has led to the development of experimental psychology and standardized testing. Victorian stream Charles Darwin was the inspiration behind Francis Galton, a scientist who advanced the development of psychometrics. In 1859, Darwin published his book On the Origin of Species. Darwin described the role of natural selection in the emergence, over time, of different populations of species of plants and animals. The book showed how individual members of a species differ among themselves and how they possess characteristics that are more or less adaptive to their environment. Those with more adaptive characteristics are more likely to survive to procreate and give rise to another generation. Those with less adaptive characteristics are less likely. These ideas stimulated Galton's interest in the study of human beings and how they differ one from another and how to measure those differences. Galton wrote a book entitled Hereditary Genius which was first published in 1869. The book described different characteristics that people possess and how those characteristics make some more "fit" than others. Today these differences, such as sensory and motor functioning (reaction time, visual acuity, and physical strength), are important domains of scientific psychology. Much of the early theoretical and applied work in psychometrics was undertaken in an attempt to measure intelligence. Galton often referred to as "the father of psychometrics," devised and included mental tests among his anthropometric measures. James McKeen Cattell, a pioneer in the field of psychometrics, went on to extend Galton's work. 
Cattell coined the term mental test, and is responsible for research and knowledge that ultimately led to the development of modern tests. German stream The origin of psychometrics also has connections to the related field of psychophysics. Around the same time that Darwin, Galton, and Cattell were making their discoveries, Herbart was also interested in "unlocking the mysteries of human consciousness" through the scientific method. Herbart was responsible for creating mathematical models of the mind, which were influential in educational practices for years to come. E.H. Weber built upon Herbart's work and tried to prove the existence of a psychological threshold, saying that a minimum stimulus was necessary to activate a sensory system. After Weber, G.T. Fechner expanded upon the knowledge he gleaned from Herbart and Weber, to devise the law that the strength of a sensation grows as the logarithm of the stimulus intensity. A follower of Weber and Fechner, Wilhelm Wundt is credited with founding the science of psychology. It is Wundt's influence that paved the way for others to develop psychological testing. 20th century In 1936, the psychometrician L. L. Thurstone, founder and first president of the Psychometric Society, developed and applied a theoretical approach to measurement referred to as the law of comparative judgment, an approach that has close connections to the psychophysical theory of Ernst Heinrich Weber and Gustav Fechner. In addition, Spearman and Thurstone both made important contributions to the theory and application of factor analysis, a statistical method developed and used extensively in psychometrics. In the late 1950s, Leopold Szondi made a historical and epistemological assessment of the impact of statistical thinking on psychology during previous few decades: "in the last decades, the specifically psychological thinking has been almost completely suppressed and removed, and replaced by a statistical thinking. Precisely here we see the cancer of testology and testomania of today." More recently, psychometric theory has been applied in the measurement of personality, attitudes, and beliefs, and academic achievement. These latent constructs cannot truly be measured, and much of the research and science in this discipline has been developed in an attempt to measure these constructs as close to the true score as possible. Figures who made significant contributions to psychometrics include Karl Pearson, Henry F. Kaiser, Carl Brigham, L. L. Thurstone, E. L. Thorndike, Georg Rasch, Eugene Galanter, Johnson O'Connor, Frederic M. Lord, Ledyard R Tucker, Louis Guttman, and Jane Loevinger. Definition of measurement in the social sciences The definition of measurement in the social sciences has a long history. A current widespread definition, proposed by Stanley Smith Stevens, is that measurement is "the assignment of numerals to objects or events according to some rule." This definition was introduced in a 1946 Science article in which Stevens proposed four levels of measurement. Although widely adopted, this definition differs in important respects from the more classical definition of measurement adopted in the physical sciences, namely that scientific measurement entails "the estimation or discovery of the ratio of some magnitude of a quantitative attribute to a unit of the same attribute" (p. 358) Indeed, Stevens's definition of measurement was put forward in response to the British Ferguson Committee, whose chair, A. Ferguson, was a physicist. 
The committee was appointed in 1932 by the British Association for the Advancement of Science to investigate the possibility of quantitatively estimating sensory events. Although its chair and other members were physicists, the committee also included several psychologists. The committee's report highlighted the importance of the definition of measurement. While Stevens's response was to propose a new definition, which has had considerable influence in the field, this was by no means the only response to the report. Another, notably different, response was to accept the classical definition, as reflected in the following statement:
Measurement in psychology and physics are in no sense different. Physicists can measure when they can find the operations by which they may meet the necessary criteria; psychologists have to do the same. They need not worry about the mysterious differences between the meaning of measurement in the two sciences (Reese, 1943, p. 49).
These divergent responses are reflected in alternative approaches to measurement. For example, methods based on covariance matrices are typically employed on the premise that numbers, such as raw scores derived from assessments, are measurements. Such approaches implicitly entail Stevens's definition of measurement, which requires only that numbers are assigned according to some rule. The main research task, then, is generally considered to be the discovery of associations between scores, and of factors posited to underlie such associations. On the other hand, when measurement models such as the Rasch model are employed, numbers are not assigned based on a rule. Instead, in keeping with Reese's statement above, specific criteria for measurement are stated, and the goal is to construct procedures or operations that provide data that meet the relevant criteria. Measurements are estimated based on the models, and tests are conducted to ascertain whether the relevant criteria have been met.

Instruments and procedures
The first psychometric instruments were designed to measure intelligence. One early approach to measuring intelligence was the test developed in France by Alfred Binet and Theodore Simon, known as the Binet–Simon test. The French test was adapted for use in the U.S. by Lewis Terman of Stanford University, and named the Stanford–Binet IQ test. Another major focus in psychometrics has been on personality testing. There has been a range of theoretical approaches to conceptualizing and measuring personality, though there is no widely agreed upon theory. Some of the better-known instruments include the Minnesota Multiphasic Personality Inventory, the Five-Factor Model (or "Big 5") and tools such as the Personality and Preference Inventory and the Myers–Briggs Type Indicator. Attitudes have also been studied extensively using psychometric approaches. An alternative method involves the application of unfolding measurement models, the most general being the Hyperbolic Cosine Model (Andrich & Luo, 1993).

Theoretical approaches
Psychometricians have developed a number of different measurement theories. These include classical test theory (CTT) and item response theory (IRT). An approach that seems mathematically similar to IRT, but is quite distinctive in terms of its origins and features, is represented by the Rasch model for measurement. The development of the Rasch model, and the broader class of models to which it belongs, was explicitly founded on requirements of measurement in the physical sciences.
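As a concrete illustration of the Rasch model just mentioned, the sketch below computes the modelled probability of a correct response from the difference between a person's ability and an item's difficulty, both expressed on the same logit scale; the numerical values are invented for illustration.

import math

# Minimal sketch of the dichotomous Rasch model: the probability of a
# correct response depends only on ability (theta) minus item difficulty (b).

def rasch_probability(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person 1 logit above an item's difficulty succeeds about 73% of the
# time; a person exactly at the item's difficulty succeeds 50% of the time.
print(round(rasch_probability(1.0, 0.0), 2))  # ~0.73
print(round(rasch_probability(0.5, 0.5), 2))  # 0.5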
Psychometricians have also developed methods for working with large matrices of correlations and covariances. Techniques in this general tradition include: factor analysis, a method of determining the underlying dimensions of data. One of the main challenges faced by users of factor analysis is a lack of consensus on appropriate procedures for determining the number of latent factors. A usual procedure is to stop factoring when eigenvalues drop below one because the original sphere shrinks. The lack of the cutting points concerns other multivariate methods, also. Multidimensional scaling is a method for finding a simple representation for data with a large number of latent dimensions. Cluster analysis is an approach to finding objects that are like each other. Factor analysis, multidimensional scaling, and cluster analysis are all multivariate descriptive methods used to distill from large amounts of data simpler structures. More recently, structural equation modeling and path analysis represent more sophisticated approaches to working with large covariance matrices. These methods allow statistically sophisticated models to be fitted to data and tested to determine if they are adequate fits. Because at a granular level psychometric research is concerned with the extent and nature of multidimensionality in each of the items of interest, a relatively new procedure known as bi-factor analysis can be helpful. Bi-factor analysis can decompose "an item's systematic variance in terms of, ideally, two sources, a general factor and one source of additional systematic variance." Key concepts Key concepts in classical test theory are reliability and validity. A reliable measure is one that measures a construct consistently across time, individuals, and situations. A valid measure is one that measures what it is intended to measure. Reliability is necessary, but not sufficient, for validity. Both reliability and validity can be assessed statistically. Consistency over repeated measures of the same test can be assessed with the Pearson correlation coefficient, and is often called test-retest reliability. Similarly, the equivalence of different versions of the same measure can be indexed by a Pearson correlation, and is called equivalent forms reliability or a similar term. Internal consistency, which addresses the homogeneity of a single test form, may be assessed by correlating performance on two halves of a test, which is termed split-half reliability; the value of this Pearson product-moment correlation coefficient for two half-tests is adjusted with the Spearman–Brown prediction formula to correspond to the correlation between two full-length tests. Perhaps the most commonly used index of reliability is Cronbach's α, which is equivalent to the mean of all possible split-half coefficients. Other approaches include the intra-class correlation, which is the ratio of variance of measurements of a given target to the variance of all targets. There are a number of different forms of validity. Criterion-related validity refers to the extent to which a test or scale predicts a sample of behavior, i.e., the criterion, that is "external to the measuring instrument itself." 
That external sample of behavior can be many things including another test; college grade point average as when the high school SAT is used to predict performance in college; and even behavior that occurred in the past, for example, when a test of current psychological symptoms is used to predict the occurrence of past victimization (which would accurately represent postdiction). When the criterion measure is collected at the same time as the measure being validated the goal is to establish concurrent validity; when the criterion is collected later the goal is to establish predictive validity. A measure has construct validity if it is related to measures of other constructs as required by theory. Content validity is a demonstration that the items of a test do an adequate job of covering the domain being measured. In a personnel selection example, test content is based on a defined statement or set of statements of knowledge, skill, ability, or other characteristics obtained from a job analysis. Item response theory models the relationship between latent traits and responses to test items. Among other advantages, IRT provides a basis for obtaining an estimate of the location of a test-taker on a given latent trait as well as the standard error of measurement of that location. For example, a university student's knowledge of history can be deduced from his or her score on a university test and then be compared reliably with a high school student's knowledge deduced from a less difficult test. Scores derived by classical test theory do not have this characteristic, and assessment of actual ability (rather than ability relative to other test-takers) must be assessed by comparing scores to those of a "norm group" randomly selected from the population. In fact, all measures derived from classical test theory are dependent on the sample tested, while, in principle, those derived from item response theory are not. Standards of quality The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any test as a whole within a given context. A consideration of concern in many applied research settings is whether or not the metric of a given psychological inventory is meaningful or arbitrary. Testing standards In 2014, the American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME) published a revision of the Standards for Educational and Psychological Testing, which describes standards for test development, evaluation, and use. The Standards cover essential topics in testing including validity, reliability/errors of measurement, and fairness in testing. The book also establishes standards related to testing operations including test design and development, scores, scales, norms, score linking, cut scores, test administration, scoring, reporting, score interpretation, test documentation, and rights and responsibilities of test takers and test users. Finally, the Standards cover topics related to testing applications, including psychological testing and assessment, workplace testing and credentialing, educational testing and assessment, and testing in program evaluation and public policy. 
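The reliability indices described under Key concepts above can be computed directly from item scores. The sketch below applies the Spearman–Brown correction to a split-half correlation and computes Cronbach's alpha for a small, invented data set; the numbers carry no empirical meaning and are there only to show the arithmetic.

# Minimal sketch: Spearman-Brown step-up and Cronbach's alpha.
# The split-half correlation and item scores below are invented.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def spearman_brown(r_half):
    """Adjust a half-test correlation to estimate full-length reliability."""
    return 2.0 * r_half / (1.0 + r_half)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]  # total score per person
    sum_item_var = sum(variance(scores) for scores in items)
    return (k / (k - 1)) * (1.0 - sum_item_var / variance(totals))

print(round(spearman_brown(0.70), 2))  # ~0.82
print(round(cronbach_alpha([[2, 4, 3, 5, 4],
                            [3, 5, 2, 5, 4],
                            [2, 4, 3, 4, 5]]), 2))  # ~0.89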
Evaluation standards In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988, The Program Evaluation Standards (2nd edition) was published in 1994, and The Student Evaluation Standards was published in 2003. Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing, and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance. Controversy and criticism Because psychometrics is based on latent psychological processes measured through correlations, there has been controversy about some psychometric measures. Critics, including practitioners in the physical sciences, have argued that such definition and quantification is difficult, and that such measurements are often misused by laymen, such as with personality tests used in employment procedures. The Standards for Educational and Psychological Measurement gives the following statement on test validity: "validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests". Simply put, a test is not valid unless it is used and interpreted in the way it is intended. Two types of tools used to measure personality traits are objective tests and projective measures. Examples of such tests are the: Big Five Inventory (BFI), Minnesota Multiphasic Personality Inventory (MMPI-2), Rorschach Inkblot test, Neurotic Personality Questionnaire KON-2006, or Eysenck Personality Questionnaire. Some of these tests are helpful because they have adequate reliability and validity, two factors that make tests consistent and accurate reflections of the underlying construct. The Myers–Briggs Type Indicator (MBTI), however, has questionable validity and has been the subject of much criticism. Psychometric specialist Robert Hogan wrote of the measure: "Most personality psychologists regard the MBTI as little more than an elaborate Chinese fortune cookie." Lee Cronbach noted in American Psychologist (1957) that, "correlational psychology, though fully as old as experimentation, was slower to mature. It qualifies equally as a discipline, however, because it asks a distinctive type of question and has technical methods of examining whether the question has been properly put and the data properly interpreted." He would go on to say, "The correlation method, for its part, can study what man has not learned to control or can never hope to control ... A true federation of the disciplines is required. Kept independent, they can give only wrong answers or no answers at all regarding certain important problems." Non-human: animals and machines Psychometrics addresses human abilities, attitudes, traits, and educational evolution. 
Notably, the study of behavior, mental processes, and abilities of non-human animals is usually addressed by comparative psychology, or, with a continuum between non-human animals and humans, by evolutionary psychology. Nonetheless, there are some advocates of a more gradual transition between the approach taken for humans and the approach taken for (non-human) animals. The evaluation of abilities, traits and learning evolution of machines has been mostly unrelated to the case of humans and non-human animals, with specific approaches in the area of artificial intelligence. A more integrated approach, under the name of universal psychometrics, has also been proposed.

Bibliography
Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research; expanded edition (1980) with foreword and afterword by B.D. Wright. Chicago: The University of Chicago Press.
Reese, T.W. (1943). The application of the theory of physical measurement to the measurement of psychological magnitudes, with three experimental examples. Psychological Monographs, 55, 1–89.
Thurstone, L.L. (1929). The Measurement of Psychological Value. In T.V. Smith and W.K. Wright (Eds.), Essays in Philosophy by Seventeen Doctors of Philosophy of the University of Chicago. Chicago: Open Court.
Thurstone, L.L. (1959). The Measurement of Values. Chicago: The University of Chicago Press.
PEST analysis
In business analysis, PEST analysis (political, economic, social and technological) is a framework of external macro-environmental factors used in strategic management and market research. PEST analysis was developed in 1967 by Francis Aguilar as an environmental scanning framework for businesses to understand the external conditions and relations of a business in order to assist managers in strategic planning. It has also been termed ETPS analysis. PEST analyses give an overview of the different macro-environmental factors to be considered by a business, indicating market growth or decline, business position, as well as the potential of and direction for operations. Components The basic PEST analysis includes four factors: political, economic, social, and technological. Political Political factors relate to how the governments intervene in economies. Specifically, political factors comprise areas including tax policy, labour law, environmental law, trade restrictions, tariffs, and political stability. Other factors include what are considered merit goods and demerit goods by a government, and the impact of governments on health, education, and infrastructure of a nation. Economic Economic factors include economic growth, exchange rates, inflation rate, and interest rates. Social Social factors include cultural aspects and health consciousness, population growth rate, age distribution, career attitudes and safety emphasis. Trends in social factors affect the demand for a company's products and how that company operates. Through analysis of social factors, companies may adopt various management strategies to adapt to social trends. Technological Technological factors include R&D activity, automation, technology incentives and the rate of technological change. These can determine barriers to entry, minimum efficient production level and influence the outsourcing decisions. Technological shifts would also affect costs, quality, and innovation. Variants Many similar frameworks have been constructed, with the addition of other components such as environment and law. These include PESTLE, PMESII-PT, STEPE, STEEP, STEEPLE, STEER, and TELOS. Legal and regulatory Legal factors include discrimination law, consumer law, antitrust law, employment law, and health and safety law, which can affect how a company operates, its costs, and the demand for its products. Regulatory factors have also been analysed as its own pillar. Environment Environmental factors include ecological and environmental aspects such as weather, climate, and climate change, which may especially affect industries such as tourism, farming, and insurance. Environmental analyses often use the PESTLE framework, which allow for the evaluation of factors affecting management decisions for coastal zone and freshwater resources, development of sustainable buildings, sustainable energy solutions, and transportation. Demographic Demographic factors have been considered in frameworks such as STEEPLED. Factors include gender, age, ethnicity, knowledge of languages, disabilities, mobility, home ownership, employment status, religious belief or practice, culture and tradition, living standards and income level. Military Military analyses have used the PMESII-PT framework, which considers political, military, economic, social, information, infrastructure, physical environment and time aspects in a military context. Operational The TELOS framework explores technical, economic, legal, operational, and scheduling factors. 
Limitations
PEST analysis can be helpful in explaining past market changes, but it is not always suitable for predicting upcoming ones.

See also
Enterprise planning systems
Macromarketing
SWOT analysis
VRIO
Carbon cycle
The carbon cycle is that part of the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of Earth. Other major biogeochemical cycles include the nitrogen cycle and the water cycle. Carbon is the main component of biological compounds as well as a major component of many rocks such as limestone. The carbon cycle comprises a sequence of events that are key to making Earth capable of sustaining life. It describes the movement of carbon as it is recycled and reused throughout the biosphere, as well as long-term processes of carbon sequestration (storage) to and release from carbon sinks. To describe the dynamics of the carbon cycle, a distinction can be made between the fast and slow carbon cycle. The fast cycle is also referred to as the biological carbon cycle. Fast cycles can complete within years, moving substances from atmosphere to biosphere, then back to the atmosphere. Slow or geological cycles (also called deep carbon cycle) can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere. Humans have disturbed the carbon cycle for many centuries. They have done so by modifying land use and by mining and burning carbon from ancient organic remains (coal, petroleum and gas). Carbon dioxide in the atmosphere has increased nearly 52% over pre-industrial levels by 2020, resulting in global warming. The increased carbon dioxide has also caused a reduction in the ocean's pH value and is fundamentally altering marine chemistry. Carbon dioxide is critical for photosynthesis. Main compartments The carbon cycle was first described by Antoine Lavoisier and Joseph Priestley, and popularised by Humphry Davy. The global carbon cycle is now usually divided into the following major reservoirs of carbon (also called carbon pools) interconnected by pathways of exchange: Atmosphere Terrestrial biosphere Ocean, including dissolved inorganic carbon and living and non-living marine biota Sediments, including fossil fuels, freshwater systems, and non-living organic material. Earth's interior (mantle and crust). These carbon stores interact with the other components through geological processes. The carbon exchanges between reservoirs occur as the result of various chemical, physical, geological, and biological processes. The ocean contains the largest active pool of carbon near the surface of the Earth. The natural flows of carbon between the atmosphere, ocean, terrestrial ecosystems, and sediments are fairly balanced; so carbon levels would be roughly stable without human influence. Atmosphere Carbon in the Earth's atmosphere exists in two main forms: carbon dioxide and methane. Both of these gases absorb and retain heat in the atmosphere and are partially responsible for the greenhouse effect. Methane produces a larger greenhouse effect per volume as compared to carbon dioxide, but it exists in much lower concentrations and is more short-lived than carbon dioxide. Thus, carbon dioxide contributes more to the global greenhouse effect than methane. Carbon dioxide is removed from the atmosphere primarily through photosynthesis and enters the terrestrial and oceanic biospheres. Carbon dioxide also dissolves directly from the atmosphere into bodies of water (ocean, lakes, etc.), as well as dissolving in precipitation as raindrops fall through the atmosphere. 
When dissolved in water, carbon dioxide reacts with water molecules and forms carbonic acid, which contributes to ocean acidity. It can then be absorbed by rocks through weathering. It also can acidify other surfaces it touches or be washed into the ocean. Human activities over the past two centuries have increased the amount of carbon in the atmosphere by nearly 50% as of year 2020, mainly in the form of carbon dioxide, both by modifying ecosystems' ability to extract carbon dioxide from the atmosphere and by emitting it directly, e.g., by burning fossil fuels and manufacturing concrete. In the far future (2 to 3 billion years), the rate at which carbon dioxide is absorbed into the soil via the carbonate–silicate cycle will likely increase due to expected changes in the sun as it ages. The expected increased luminosity of the Sun will likely speed up the rate of surface weathering. This will eventually cause most of the carbon dioxide in the atmosphere to be squelched into the Earth's crust as carbonate. Once the concentration of carbon dioxide in the atmosphere falls below approximately 50 parts per million (tolerances vary among species), C3 photosynthesis will no longer be possible. This has been predicted to occur 600 million years from the present, though models vary. Once the oceans on the Earth evaporate in about 1.1 billion years from now, plate tectonics will very likely stop due to the lack of water to lubricate them. The lack of volcanoes pumping out carbon dioxide will cause the carbon cycle to end between 1 billion and 2 billion years into the future. Terrestrial biosphere The terrestrial biosphere includes the organic carbon in all land-living organisms, both alive and dead, as well as carbon stored in soils. About 500 gigatons of carbon are stored above ground in plants and other living organisms, while soil holds approximately 1,500 gigatons of carbon. Most carbon in the terrestrial biosphere is organic carbon, while about a third of soil carbon is stored in inorganic forms, such as calcium carbonate. Organic carbon is a major component of all organisms living on Earth. Autotrophs extract it from the air in the form of carbon dioxide, converting it to organic carbon, while heterotrophs receive carbon by consuming other organisms. Because carbon uptake in the terrestrial biosphere is dependent on biotic factors, it follows a diurnal and seasonal cycle. In CO2 measurements, this feature is apparent in the Keeling curve. It is strongest in the northern hemisphere because this hemisphere has more land mass than the southern hemisphere and thus more room for ecosystems to absorb and emit carbon. Carbon leaves the terrestrial biosphere in several ways and on different time scales. The combustion or respiration of organic carbon releases it rapidly into the atmosphere. It can also be exported into the ocean through rivers or remain sequestered in soils in the form of inert carbon. Carbon stored in soil can remain there for up to thousands of years before being washed into rivers by erosion or released into the atmosphere through soil respiration. Between 1989 and 2008 soil respiration increased by about 0.1% per year. In 2008, the global total of CO2 released by soil respiration was roughly 98 billion tonnes, about 3 times more carbon than humans are now putting into the atmosphere each year by burning fossil fuel (this does not represent a net transfer of carbon from soil to atmosphere, as the respiration is largely offset by inputs to soil carbon). 
There are a few plausible explanations for this trend, but the most likely explanation is that increasing temperatures have increased rates of decomposition of soil organic matter, which has increased the flow of CO2. The length of carbon sequestering in soil is dependent on local climatic conditions and thus changes in the course of climate change. Ocean The ocean can be conceptually divided into a surface layer within which water makes frequent (daily to annual) contact with the atmosphere, and a deep layer below the typical mixed layer depth of a few hundred meters or less, within which the time between consecutive contacts may be centuries. The dissolved inorganic carbon (DIC) in the surface layer is exchanged rapidly with the atmosphere, maintaining equilibrium. Partly because its concentration of DIC is about 15% higher but mainly due to its larger volume, the deep ocean contains far more carbon—it is the largest pool of actively cycled carbon in the world, containing 50 times more than the atmosphere—but the timescale to reach equilibrium with the atmosphere is hundreds of years: the exchange of carbon between the two layers, driven by thermohaline circulation, is slow. Carbon enters the ocean mainly through the dissolution of atmospheric carbon dioxide, a small fraction of which is converted into carbonate. It can also enter the ocean through rivers as dissolved organic carbon. It is converted by organisms into organic carbon through photosynthesis and can either be exchanged throughout the food chain or precipitated into the oceans' deeper, more carbon-rich layers as dead soft tissue or in shells as calcium carbonate. It circulates in this layer for long periods of time before either being deposited as sediment or, eventually, returned to the surface waters through thermohaline circulation. Oceans are basic (with a current pH value of 8.1 to 8.2). The increase in atmospheric CO2 shifts the pH of the ocean towards neutral in a process called ocean acidification. Oceanic absorption of CO2 is one of the most important forms of carbon sequestering. The projected rate of pH reduction could slow the biological precipitation of calcium carbonates, thus decreasing the ocean's capacity to absorb CO2. Geosphere The geologic component of the carbon cycle operates slowly in comparison to the other parts of the global carbon cycle. It is one of the most important determinants of the amount of carbon in the atmosphere, and thus of global temperatures. Most of the Earth's carbon is stored inertly in the Earth's lithosphere. Much of the carbon stored in the Earth's mantle was stored there when the Earth formed. Some of it was deposited in the form of organic carbon from the biosphere. Of the carbon stored in the geosphere, about 80% is limestone and its derivatives, which form from the sedimentation of calcium carbonate stored in the shells of marine organisms. The remaining 20% is stored as kerogens formed through the sedimentation and burial of terrestrial organisms under high heat and pressure. Organic carbon stored in the geosphere can remain there for millions of years. Carbon can leave the geosphere in several ways. Carbon dioxide is released during the metamorphism of carbonate rocks when they are subducted into the Earth's mantle. This carbon dioxide can be released into the atmosphere and ocean through volcanoes and hotspots. It can also be removed by humans through the direct extraction of kerogens in the form of fossil fuels. 
After extraction, fossil fuels are burned to release energy and emit the carbon they store into the atmosphere. Types of dynamic There is a fast and a slow carbon cycle. The fast cycle operates in the biosphere and the slow cycle operates in rocks. The fast or biological cycle can complete within years, moving carbon from atmosphere to biosphere, then back to the atmosphere. The slow or geological cycle may extend deep into the mantle and can take millions of years to complete, moving carbon through the Earth's crust between rocks, soil, ocean and atmosphere. The fast carbon cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere (see diagram at start of article). It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change. The slow (or deep) carbon cycle involves medium to long-term geochemical processes belonging to the rock cycle (see diagram on the right). The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels. Processes within fast carbon cycle Terrestrial carbon in the water cycle The movement of terrestrial carbon in the water cycle is shown in the diagram on the right and explained below: Atmospheric particles act as cloud condensation nuclei, promoting cloud formation. Raindrops absorb organic and inorganic carbon through particle scavenging and adsorption of organic vapors while falling toward Earth. Burning and volcanic eruptions produce highly condensed polycyclic aromatic molecules (i.e. black carbon) that is returned to the atmosphere along with greenhouse gases such as CO2. Terrestrial plants fix atmospheric CO2 through photosynthesis, returning a fraction back to the atmosphere through respiration. Lignin and celluloses represent as much as 80% of the organic carbon in forests and 60% in pastures. Litterfall and root organic carbon mix with sedimentary material to form organic soils where plant-derived and petrogenic organic carbon is both stored and transformed by microbial and fungal activity. Water absorbs plant and settled aerosol-derived dissolved organic carbon (DOC) and dissolved inorganic carbon (DIC) as it passes over forest canopies (i.e. throughfall) and along plant trunks/stems (i.e. stemflow). 
Biogeochemical transformations take place as water soaks into soil solution and groundwater reservoirs and overland flow occurs when soils are completely saturated, or rainfall occurs more rapidly than saturation into soils. Organic carbon derived from the terrestrial biosphere and in situ primary production is decomposed by microbial communities in rivers and streams along with physical decomposition (i.e. photo-oxidation), resulting in a flux of CO2 from rivers to the atmosphere that are the same order of magnitude as the amount of carbon sequestered annually by the terrestrial biosphere. Terrestrially-derived macromolecules such as lignin and black carbon are decomposed into smaller components and monomers, ultimately being converted to CO2, metabolic intermediates, or biomass. Lakes, reservoirs, and floodplains typically store large amounts of organic carbon and sediments, but also experience net heterotrophy in the water column, resulting in a net flux of CO2 to the atmosphere that is roughly one order of magnitude less than rivers. Methane production is also typically high in the anoxic sediments of floodplains, lakes, and reservoirs. Primary production is typically enhanced in river plumes due to the export of fluvial nutrients. Nevertheless, estuarine waters are a source of CO2 to the atmosphere, globally. Coastal marshes both store and export blue carbon. Marshes and wetlands are suggested to have an equivalent flux of CO2 to the atmosphere as rivers, globally. Continental shelves and the open ocean typically absorb CO2 from the atmosphere. The marine biological pump sequesters a small but significant fraction of the absorbed CO2 as organic carbon in marine sediments (see below). Terrestrial runoff to the ocean Terrestrial and marine ecosystems are chiefly connected through riverine transport, which acts as the main channel through which erosive terrestrially derived substances enter into oceanic systems. Material and energy exchanges between the terrestrial biosphere and the lithosphere as well as organic carbon fixation and oxidation processes together regulate ecosystem carbon and dioxygen (O2) pools. Riverine transport, being the main connective channel of these pools, will act to transport net primary productivity (primarily in the form of dissolved organic carbon (DOC) and particulate organic carbon (POC)) from terrestrial to oceanic systems. During transport, part of DOC will rapidly return to the atmosphere through redox reactions, causing "carbon degassing" to occur between land-atmosphere storage layers. The remaining DOC and dissolved inorganic carbon (DIC) are also exported to the ocean. In 2015, inorganic and organic carbon export fluxes from global rivers were assessed as 0.50–0.70 Pg C y−1 and 0.15–0.35 Pg C y−1 respectively. On the other hand, POC can remain buried in sediment over an extensive period, and the annual global terrestrial to oceanic POC flux has been estimated at 0.20 (+0.13,-0.07) Gg C y−1. Biological pump in the ocean The ocean biological pump is the ocean's biologically driven sequestration of carbon from the atmosphere and land runoff to the deep ocean interior and seafloor sediments. The biological pump is not so much the result of a single process, but rather the sum of a number of processes each of which can influence biological pumping. The pump transfers about 11 billion tonnes of carbon every year into the ocean's interior. An ocean without the biological pump would result in atmospheric CO2 levels about 400 ppm higher than the present day. 
Most carbon incorporated in organic and inorganic biological matter is formed at the sea surface where it can then start sinking to the ocean floor. The deep ocean gets most of its nutrients from the higher water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material. The biological pump is responsible for transforming dissolved inorganic carbon (DIC) into organic biomass and pumping it in particulate or dissolved form into the deep ocean. Inorganic nutrients and carbon dioxide are fixed during photosynthesis by phytoplankton, which both release dissolved organic matter (DOM) and are consumed by herbivorous zooplankton. Larger zooplankton - such as copepods, egest fecal pellets - which can be reingested, and sink or collect with other organic detritus into larger, more-rapidly-sinking aggregates. DOM is partially consumed by bacteria and respired; the remaining refractory DOM is advected and mixed into the deep sea. DOM and aggregates exported into the deep water are consumed and respired, thus returning organic carbon into the enormous deep ocean reservoir of DIC. A single phytoplankton cell has a sinking rate around one metre per day. Given that the average depth of the ocean is about four kilometres, it can take over ten years for these cells to reach the ocean floor. However, through processes such as coagulation and expulsion in predator fecal pellets, these cells form aggregates. These aggregates have sinking rates orders of magnitude greater than individual cells and complete their journey to the deep in a matter of days. About 1% of the particles leaving the surface ocean reach the seabed and are consumed, respired, or buried in the sediments. The net effect of these processes is to remove carbon in organic form from the surface and return it to DIC at greater depths, maintaining a surface-to-deep ocean gradient of DIC. Thermohaline circulation returns deep-ocean DIC to the atmosphere on millennial timescales. The carbon buried in the sediments can be subducted into the earth's mantle and stored for millions of years as part of the slow carbon cycle (see next section). Processes within slow carbon cycle Slow or deep carbon cycling is an important process, though it is not as well-understood as the relatively fast carbon movement through the atmosphere, terrestrial biosphere, ocean, and geosphere. The deep carbon cycle is intimately connected to the movement of carbon in the Earth's surface and atmosphere. If the process did not exist, carbon would remain in the atmosphere, where it would accumulate to extremely high levels over long periods of time. Therefore, by allowing carbon to return to the Earth, the deep carbon cycle plays a critical role in maintaining the terrestrial conditions necessary for life to exist. Furthermore, the process is also significant simply due to the massive quantities of carbon it transports through the planet. In fact, studying the composition of basaltic magma and measuring carbon dioxide flux out of volcanoes reveals that the amount of carbon in the mantle is actually greater than that on the Earth's surface by a factor of one thousand. Drilling down and physically observing deep-Earth carbon processes is evidently extremely difficult, as the lower mantle and core extend from 660 to 2,891 km and 2,891 to 6,371  km deep into the Earth respectively. Accordingly, not much is conclusively known regarding the role of carbon in the deep Earth. 
Nonetheless, several pieces of evidence—many of which come from laboratory simulations of deep Earth conditions—have indicated mechanisms for the element's movement down into the lower mantle, as well as the forms that carbon takes at the extreme temperatures and pressures of said layer. Furthermore, techniques like seismology have led to a greater understanding of the potential presence of carbon in the Earth's core. Carbon in the lower mantle Carbon principally enters the mantle in the form of carbonate-rich sediments on tectonic plates of ocean crust, which pull the carbon into the mantle upon undergoing subduction. Not much is known about carbon circulation in the mantle, especially in the deep Earth, but many studies have attempted to augment our understanding of the element's movement and forms within the region. For instance, a 2011 study demonstrated that carbon cycling extends all the way to the lower mantle. The study analyzed rare, super-deep diamonds at a site in Juina, Brazil, determining that the bulk composition of some of the diamonds' inclusions matched the expected result of basalt melting and crystallisation under lower mantle temperatures and pressures. Thus, the investigation's findings indicate that pieces of basaltic oceanic lithosphere act as the principle transport mechanism for carbon to Earth's deep interior. These subducted carbonates can interact with lower mantle silicates, eventually forming super-deep diamonds like the one found. However, carbonates descending to the lower mantle encounter other fates in addition to forming diamonds. In 2011, carbonates were subjected to an environment similar to that of 1800 km deep into the Earth, well within the lower mantle. Doing so resulted in the formations of magnesite, siderite, and numerous varieties of graphite. Other experiments—as well as petrologic observations—support this claim, indicating that magnesite is actually the most stable carbonate phase in most part of the mantle. This is largely a result of its higher melting temperature. Consequently, scientists have concluded that carbonates undergo reduction as they descend into the mantle before being stabilised at depth by low oxygen fugacity environments. Magnesium, iron, and other metallic compounds act as buffers throughout the process. The presence of reduced, elemental forms of carbon like graphite would indicate that carbon compounds are reduced as they descend into the mantle. Polymorphism alters carbonate compounds' stability at different depths within the Earth. To illustrate, laboratory simulations and density functional theory calculations suggest that tetrahedrally coordinated carbonates are most stable at depths approaching the core–mantle boundary. A 2015 study indicates that the lower mantle's high pressure causes carbon bonds to transition from sp2 to sp3 hybridised orbitals, resulting in carbon tetrahedrally bonding to oxygen. CO3 trigonal groups cannot form polymerisable networks, while tetrahedral CO4 can, signifying an increase in carbon's coordination number, and therefore drastic changes in carbonate compounds' properties in the lower mantle. As an example, preliminary theoretical studies suggest that high pressure causes carbonate melt viscosity to increase; the melts' lower mobility as a result of its increased viscosity causes large deposits of carbon deep into the mantle. Accordingly, carbon can remain in the lower mantle for long periods of time, but large concentrations of carbon frequently find their way back to the lithosphere. 
This process, called carbon outgassing, is the result of carbonated mantle undergoing decompression melting, as well as mantle plumes carrying carbon compounds up towards the crust. Carbon is oxidised upon its ascent towards volcanic hotspots, where it is then released as CO2. This occurs so that the carbon atom matches the oxidation state of the basalts erupting in such areas. Carbon in the core Although the presence of carbon in the Earth's core is well-constrained, recent studies suggest large inventories of carbon could be stored in this region. Shear (S) waves moving through the inner core travel at about fifty percent of the velocity expected for most iron-rich alloys. Because the core's composition is believed to be an alloy of crystalline iron and a small amount of nickel, this seismic anomaly indicates the presence of light elements, including carbon, in the core. In fact, studies using diamond anvil cells to replicate the conditions in the Earth's core indicate that iron carbide (Fe7C3) matches the inner core's wave speed and density. Therefore, the iron carbide model could serve as an evidence that the core holds as much as 67% of the Earth's carbon. Furthermore, another study found that in the pressure and temperature condition of the Earth's inner core, carbon dissolved in iron and formed a stable phase with the same Fe7C3 composition—albeit with a different structure from the one previously mentioned. In summary, although the amount of carbon potentially stored in the Earth's core is not known, recent studies indicate that the presence of iron carbides can explain some of the geophysical observations. Viruses as regulators Viruses act as "regulators" of the global carbon cycle because they impact the material cycles and energy flows of food webs and the microbial loop. The average contribution of viruses to the Earth ecosystem carbon cycle is 8.6%, of which its contribution to marine ecosystems (1.4%) is less than its contribution to terrestrial (6.7%) and freshwater (17.8%) ecosystems. Over the past 2,000 years, anthropogenic activities and climate change have gradually altered the regulatory role of viruses in ecosystem carbon cycling processes. This has been particularly conspicuous over the past 200 years due to rapid industrialization and the attendant population growth. Human influence on fast carbon cycle Since the Industrial Revolution, and especially since the end of WWII, human activity has substantially disturbed the global carbon cycle by redistributing massive amounts of carbon from the geosphere. Humans have also continued to shift the natural component functions of the terrestrial biosphere with changes to vegetation and other land use. Man-made (synthetic) carbon compounds have been designed and mass-manufactured that will persist for decades to millennia in air, water, and sediments as pollutants. Climate change is amplifying and forcing further indirect human changes to the carbon cycle as a consequence of various positive and negative feedbacks. Climate change Current trends in climate change lead to higher ocean temperatures and acidity, thus modifying marine ecosystems. Also, acid rain and polluted runoff from agriculture and industry change the ocean's chemical composition. Such changes can have dramatic effects on highly sensitive ecosystems such as coral reefs, thus limiting the ocean's ability to absorb carbon from the atmosphere on a regional scale and reducing oceanic biodiversity globally. 
The exchanges of carbon between the atmosphere and other components of the Earth system, collectively known as the carbon cycle, currently constitute important negative (dampening) feedbacks on the effect of anthropogenic carbon emissions on climate change. Carbon sinks in the land and the ocean each currently take up about one-quarter of anthropogenic carbon emissions each year. These feedbacks are expected to weaken in the future, amplifying the effect of anthropogenic carbon emissions on climate change. The degree to which they will weaken, however, is highly uncertain, with Earth system models predicting a wide range of land and ocean carbon uptakes even under identical atmospheric concentration or emission scenarios. Arctic methane emissions indirectly caused by anthropogenic global warming also affect the carbon cycle and contribute to further warming. Fossil carbon extraction and burning The largest and one of the fastest growing human impacts on the carbon cycle and biosphere is the extraction and burning of fossil fuels, which directly transfer carbon from the geosphere into the atmosphere. Carbon dioxide is also produced and released during the calcination of limestone for clinker production. Clinker is an industrial precursor of cement. , about 450 gigatons of fossil carbon have been extracted in total; an amount approaching the carbon contained in all of Earth's living terrestrial biomass. Recent rates of global emissions directly into the atmosphere have exceeded the uptake by vegetation and the oceans. These sinks have been expected and observed to remove about half of the added atmospheric carbon within about a century. Nevertheless, sinks like the ocean have evolving saturation properties, and a substantial fraction (20–35%, based on coupled models) of the added carbon is projected to remain in the atmosphere for centuries to millennia. Halocarbons Halocarbons are less prolific compounds developed for diverse uses throughout industry; for example as solvents and refrigerants. Nevertheless, the buildup of relatively small concentrations (parts per trillion) of chlorofluorocarbon, hydrofluorocarbon, and perfluorocarbon gases in the atmosphere is responsible for about 10% of the total direct radiative forcing from all long-lived greenhouse gases (year 2019); which includes forcing from the much larger concentrations of carbon dioxide and methane. Chlorofluorocarbons also cause stratospheric ozone depletion. International efforts are ongoing under the Montreal Protocol and Kyoto Protocol to control rapid growth in the industrial manufacturing and use of these environmentally potent gases. For some applications more benign alternatives such as hydrofluoroolefins have been developed and are being gradually introduced. Land use changes Since the invention of agriculture, humans have directly and gradually influenced the carbon cycle over century-long timescales by modifying the mixture of vegetation in the terrestrial biosphere. Over the past several centuries, direct and indirect human-caused land use and land cover change (LUCC) has led to the loss of biodiversity, which lowers ecosystems' resilience to environmental stresses and decreases their ability to remove carbon from the atmosphere. More directly, it often leads to the release of carbon from terrestrial ecosystems into the atmosphere. Deforestation for agricultural purposes removes forests, which hold large amounts of carbon, and replaces them, generally with agricultural or urban areas. 
Both of these replacement land cover types store comparatively small amounts of carbon so that the net result of the transition is that more carbon stays in the atmosphere. However, the effects on the atmosphere and overall carbon cycle can be intentionally and/or naturally reversed with reforestation.
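As a rough numerical illustration of the partitioning described above (land and ocean sinks each currently taking up about one-quarter of anthropogenic emissions), the following sketch computes the resulting airborne fraction for a hypothetical year of emissions. The emission figure and the approximate conversion of 2.13 GtC per ppm of atmospheric CO2 are illustrative assumptions, not values taken from this text.

```python
# Illustrative one-year carbon budget (hypothetical numbers, not measurements).
# Land and ocean sinks each take up roughly one quarter of emissions, so about
# half of the emitted carbon remains airborne.
emissions_gtc = 10.0                      # assumed annual emissions, GtC/yr
land_uptake = 0.25 * emissions_gtc        # ~one quarter to the land sink
ocean_uptake = 0.25 * emissions_gtc       # ~one quarter to the ocean sink
airborne_gtc = emissions_gtc - land_uptake - ocean_uptake

GTC_PER_PPM = 2.13                        # commonly used approximation
print(f"airborne fraction ~ {airborne_gtc / emissions_gtc:.0%}")
print(f"atmospheric CO2 rise ~ {airborne_gtc / GTC_PER_PPM:.1f} ppm")
```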
Vestigiality
Vestigiality is the retention, during the process of evolution, of genetically determined structures or attributes that have lost some or all of the ancestral function in a given species. Assessment of the vestigiality must generally rely on comparison with homologous features in related species. The emergence of vestigiality occurs by normal evolutionary processes, typically by loss of function of a feature that is no longer subject to positive selection pressures when it loses its value in a changing environment. The feature may be selected against more urgently when its function becomes definitively harmful, but if the lack of the feature provides no advantage, and its presence provides no disadvantage, the feature may not be phased out by natural selection and persist across species. Examples of vestigial structures (also called degenerate, atrophied, or rudimentary organs) are the loss of functional wings in island-dwelling birds; the human vomeronasal organ; and the hindlimbs of the snake and whale. Overview Vestigial features may take various forms; for example, they may be patterns of behavior, anatomical structures, or biochemical processes. Like most other physical features, however functional, vestigial features in a given species may successively appear, develop, and persist or disappear at various stages within the life cycle of the organism, ranging from early embryonic development to late adulthood. Vestigiality, biologically speaking, refers to organisms retaining organs that have seemingly lost their original function. Vestigial organs are common evolutionary knowledge. In addition, the term vestigiality is useful in referring to many genetically determined features, either morphological, behavioral, or physiological; in any such context, however, it need not follow that a vestigial feature must be completely useless. A classic example at the level of gross anatomy is the human vermiform appendix, vestigial in the sense of retaining no significant digestive function. Similar concepts apply at the molecular level—some nucleic acid sequences in eukaryotic genomes have no known biological function; some of them may be "junk DNA", but it is a difficult matter to demonstrate that a particular sequence in a particular region of a given genome is truly nonfunctional. The simple fact that it is noncoding DNA does not establish that it is functionless. Furthermore, even if an extant DNA sequence is functionless, it does not follow that it has descended from an ancestral sequence of functional DNA. Logically such DNA would not be vestigial in the sense of being the vestige of a functional structure. In contrast pseudogenes have lost their protein-coding ability or are otherwise no longer expressed in the cell. Whether they have any extant function or not, they have lost their former function and in that sense, they do fit the definition of vestigiality. Vestigial structures are often called vestigial organs, although many of them are not actually organs. Such vestigial structures typically are degenerate, atrophied, or rudimentary, and tend to be much more variable than homologous non-vestigial parts. Although structures commonly regarded "vestigial" may have lost some or all of the functional roles that they had played in ancestral organisms, such structures may retain lesser functions or may have become adapted to new roles in extant populations. It is important to avoid confusion of the concept of vestigiality with that of exaptation. 
Both may occur together in the same example, depending on the relevant point of view. In exaptation, a structure originally used for one purpose is modified for a new one. For example, the wings of penguins would be exaptational in the sense of serving a substantial new purpose (underwater locomotion), but might still be regarded as vestigial in the sense of having lost the function of flight. In contrast Darwin argued that the wings of emus would be definitely vestigial, as they appear to have no major extant function; however, function is a matter of degree, so judgments on what is a "major" function are arbitrary; the emu does seem to use its wings as organs of balance in running. Similarly, the ostrich uses its wings in displays and temperature control, though they are undoubtedly vestigial as structures for flight. Vestigial characters range from detrimental through neutral to favorable in terms of selection. Some may be of some limited utility to an organism but still degenerate over time if they do not confer a significant enough advantage in terms of fitness to avoid the effects of genetic drift or competing selective pressures. Vestigiality in its various forms presents many examples of evidence for biological evolution. History Vestigial structures have been noticed since ancient times, and the reason for their existence was long speculated upon before Darwinian evolution provided a widely accepted explanation. In the 4th century BC, Aristotle was one of the earliest writers to comment, in his History of Animals, on the vestigial eyes of moles, calling them "stunted in development" due to the fact that moles can scarcely see. However, only in recent centuries have anatomical vestiges become a subject of serious study. In 1798, Étienne Geoffroy Saint-Hilaire noted on vestigial structures: His colleague, Jean-Baptiste Lamarck, named a number of vestigial structures in his 1809 book Philosophie Zoologique. Lamarck noted "Olivier's Spalax, which lives underground like the mole, and is apparently exposed to daylight even less than the mole, has altogether lost the use of sight: so that it shows nothing more than vestiges of this organ." Charles Darwin was familiar with the concept of vestigial structures, though the term for them did not yet exist. He listed a number of them in The Descent of Man, including the muscles of the ear, wisdom teeth, the appendix, the tail bone, body hair, and the semilunar fold in the corner of the eye. Darwin also noted, in On the Origin of Species, that a vestigial structure could be useless for its primary function, but still retain secondary anatomical roles: "An organ serving for two purposes, may become rudimentary or utterly aborted for one, even the more important purpose, and remain perfectly efficient for the other.... [A]n organ may become rudimentary for its proper purpose, and be used for a distinct object." In the first edition of On the Origin of Species, Darwin briefly mentioned inheritance of acquired characters under the heading "Effects of Use and Disuse", expressing little doubt that use "strengthens and enlarges certain parts, and disuse diminishes them; and that such modifications are inherited". In later editions he expanded his thoughts on this, and in the final chapter of the 6th edition concluded that species have been modified "chiefly through the natural selection of numerous successive, slight, favorable variations; aided in an important manner by the inherited effects of the use and disuse of parts". 
In 1893, Robert Wiedersheim published The Structure of Man, a book on human anatomy and its relevance to man's evolutionary history. The Structure of Man contained a list of 86 human organs that Wiedersheim described as "Organs having become wholly or in part functionless, some appearing in the Embryo alone, others present during Life constantly or inconstantly. For the greater part Organs which may be rightly termed Vestigial." Since his time, the functions of some of these structures have been discovered, while other anatomical vestiges have been unearthed, making the list primarily of interest as a record of the knowledge of human anatomy at the time. Later versions of Wiedersheim's list were expanded to as many as 180 human "vestigial organs". This is why the zoologist Horatio Newman said in a written statement read into evidence in the Scopes Trial that "There are, according to Wiedersheim, no less than 180 vestigial structures in the human body, sufficient to make of a man a veritable walking museum of antiquities." Common descent and evolutionary theory Vestigial structures are often homologous to structures that are functioning normally in other species. Therefore, vestigial structures can be considered evidence for evolution, the process by which beneficial heritable traits arise in populations over an extended period of time. The existence of vestigial traits can be attributed to changes in the environment and behavior patterns of the organism in question. Examination of these various traits makes it clear that evolution played a central role in shaping organisms. Every anatomical structure or behavioral response originated in a context in which it was, at one time, useful. As time progressed and lineages diverged from their ancient common ancestors, natural selection favored the more advantageous structures, while others fell by the wayside. Once the function of a trait is no longer beneficial for survival, the likelihood that future offspring will inherit the "normal" form of it decreases. In some cases, the structure becomes detrimental to the organism (for example the eyes of a mole can become infected). In many cases the structure is of no direct harm, yet all structures require extra energy in terms of development, maintenance, and weight, and are also a risk in terms of disease (e.g., infection, cancer), providing some selective pressure for the removal of parts that do not contribute to an organism's fitness. A structure that is not harmful will take longer to be 'phased out' than one that is. However, some vestigial structures may persist due to limitations in development, such that complete loss of the structure could not occur without major alterations of the organism's developmental pattern, and such alterations would likely produce numerous negative side-effects. The toes of many animals such as horses, which stand on a single toe, are still present in a vestigial form and may become evident, although rarely, in individual animals. The vestigial versions of the structure can be compared to the original version of the structure in other species in order to determine the homology of a vestigial structure. Homologous structures indicate common ancestry with those organisms that have a functional version of the structure.
Douglas Futuyma has stated that vestigial structures make no sense without evolution, just as spelling and usage of many modern English words can only be explained by their Latin or Old Norse antecedents. Vestigial traits can still be considered adaptations. This is because an adaptation is often defined as a trait that has been favored by natural selection. Adaptations, therefore, need not be adaptive, as long as they were at some point. Examples Non-human animals Vestigial characters are present throughout the animal kingdom, and an almost endless list could be given. Darwin said that "it would be impossible to name one of the higher animals in which some part or other is not in a rudimentary condition." The wings of ostriches, emus and other flightless birds are vestigial; they are remnants of their flying ancestors' wings. These birds still go through the effort of developing wings, even though these birds are too large to use the wings successfully. Vestigial wings are also common in birds that no longer need to fly to escape predators, such as birds on the Galapagos Islands. The eyes of certain cavefish and salamanders are vestigial, as they no longer allow the organism to see, and are remnants of their ancestors' functional eyes. Animals that reproduce without sex (via asexual reproduction) generally lose their sexual traits, such as the ability to locate/recognize the opposite sex and copulation behavior. Boas and pythons have vestigial pelvis remnants, which are externally visible as two small pelvic spurs on each side of the cloaca. These spurs are sometimes used in copulation, but are not essential, as no colubrid snake (the vast majority of species) possesses these remnants. Furthermore, in most snakes, the left lung is greatly reduced or absent. Amphisbaenians, which independently evolved limblessness, also retain vestiges of the pelvis as well as the pectoral girdle, and have lost their right lung. A case of vestigial organs was described in polyopisthocotylean Monogeneans (parasitic flatworms). These parasites usually have a posterior attachment organ with several clamps, which are sclerotised organs attaching the worm to the gill of the host fish. These clamps are extremely important for the survival of the parasite. In the family Protomicrocotylidae, species have either normal clamps, simplified clamps, or no clamps at all (in the genus Lethacotyle). After a comparative study of the relative surface of clamps in more than 100 Monogeneans, this has been interpreted as an evolutionary sequence leading to the loss of clamps. At the same time, other attachment structures (lateral flaps, transverse striations) have evolved in protomicrocotylids. Therefore, clamps in protomicrocotylids were considered vestigial organs. In the foregoing examples the vestigiality is generally the (sometimes incidental) result of adaptive evolution. However, there are many examples of vestigiality as the product of drastic mutation, and such vestigiality is usually harmful or counter-adaptive. One of the earliest documented examples was that of vestigial wings in Drosophila. Many examples in many other contexts have emerged since. Humans Human vestigiality is related to human evolution, and includes a variety of characters occurring in the human species. Many examples of these are vestigial in other primates and related animals, whereas other examples are still highly developed.
The human caecum is vestigial, as often is the case in omnivores, being reduced to a single chamber receiving the content of the ileum into the colon. The ancestral caecum would have been a large, blind diverticulum in which resistant plant material such as cellulose would have been fermented in preparation for absorption in the colon. Analogous organs in other animals similar to humans continue to perform similar functions. The coccyx, or tailbone, though a vestige of the tail of some primate ancestors, is functional as an anchor for certain pelvic muscles including: the levator ani muscle and the largest gluteal muscle, the gluteus maximus. Other structures that are vestigial include the plica semilunaris on the inside corner of the eye (a remnant of the nictitating membrane); and (as seen at right) muscles in the ear. Other organic structures (such as the occipitofrontalis muscle) have lost their original functions (to keep the head from falling) but are still useful for other purposes (facial expression). Humans also bear some vestigial behaviors and reflexes. The formation of goose bumps in humans under stress is a vestigial reflex; its function in human ancestors was to raise the body's hair, making the ancestor appear larger and scaring off predators. The arrector pili (muscle that connects the hair follicle to connective tissue) contracts and creates goosebumps on skin. There are also vestigial molecular structures in humans, which are no longer in use but may indicate common ancestry with other species. One example of this is a gene that is functional in most other mammals and which produces L-gulonolactone oxidase, an enzyme that can make vitamin C. A documented mutation deactivated the gene in an ancestor of the modern infraorder of monkeys, and apes, and it now remains in their genomes, including the human genome, as a vestigial sequence called a pseudogene. The shift in human diet towards soft and processed food over time caused a reduction in the number of powerful grinding teeth, especially the third molars (also known as wisdom teeth), which were highly prone to impaction. Plants and fungi Plants also have vestigial parts, including functionless stipules and carpels, leaf reduction of Equisetum, paraphyses of Fungi. Well known examples are the reductions in floral display, leading to smaller and/or paler flowers, in plants that reproduce without outcrossing, for example via selfing or obligate clonal reproduction. Objects Many objects in daily use contain vestigial structures. While not the result of natural selection through random mutation, much of the process is the same. Product design, like evolution, is iterative; it builds on features and processes that already exist, with limited resources available to make tweaks. To spend resources on completely weeding out a form that serves no purpose (if at the same time it is not an obstruction either) is not economically astute. These vestigial structures differ from the concept of skeuomorphism in that a skeuomorph is a design feature that has been specifically implemented as a reference to the past, enabling users to acclimatise quicker. A vestigial feature does not exist intentionally, or even usefully. For example, men's business suits often contain a row of buttons at the bottom of the sleeve. These used to serve a purpose, allowing the sleeve to be split and rolled up. The feature has been lost entirely, though most suits still give the impression that it is possible, complete with fake button holes. 
There is also an example of exaptation to be found in the business suit: it was previously possible to button a jacket up all the way to the top. As it became the fashion to fold the lapel over, the top half of buttons and their accompanying buttonholes disappeared, save for a single hole at the top; it has since found a new use as a place to fasten pins, badges, or boutonnières. As a final example, soldiers in ceremonial or parade uniform can sometimes be seen wearing a gorget: a small decorative piece of metal suspended around the neck with a chain. The gorget provides no protection to the wearer, yet there exists an unbroken lineage from the gorget to the full suits of armour of the Middle Ages. With the introduction of gunpowder weapons, armour increasingly lost its usefulness on the battlefield. At the same time, military men were keen to retain the status it provided them. The result: a breastplate that "shrank" away over time, but never disappeared completely.
Molecular biology
Molecular biology is a branch of biology that seeks to understand the molecular basis of biological activity in and between cells, including biomolecular synthesis, modification, mechanisms, and interactions. Though cells and other microscopic structures had been observed in living organisms as early as the 18th century, a detailed understanding of the mechanisms and interactions governing their behavior did not emerge until the 20th century, when technologies used in physics and chemistry had advanced sufficiently to permit their application in the biological sciences. The term 'molecular biology' was first used in 1945 by the English physicist William Astbury, who described it as an approach focused on discerning the underpinnings of biological phenomena—i.e. uncovering the physical and chemical structures and properties of biological molecules, as well as their interactions with other molecules and how these interactions explain observations of so-called classical biology, which instead studies biological processes at larger scales and higher levels of organization. In 1953, Francis Crick, James Watson, Rosalind Franklin, and their colleagues at the Medical Research Council Unit, Cavendish Laboratory, were the first to describe the double helix model for the chemical structure of deoxyribonucleic acid (DNA), which is often considered a landmark event for the nascent field because it provided a physico-chemical basis by which to understand the previously nebulous idea of nucleic acids as the primary substance of biological inheritance. They proposed this structure based on previous research done by Franklin, which was conveyed to them by Maurice Wilkins and Max Perutz. Their work led to the discovery of DNA in other microorganisms, plants, and animals. The field of molecular biology includes techniques which enable scientists to learn about molecular processes. These techniques are used to efficiently target new drugs, diagnose disease, and better understand cell physiology. Some clinical research and medical therapies arising from molecular biology are covered under gene therapy, whereas the use of molecular biology or molecular cell biology in medicine is now referred to as molecular medicine. History of molecular biology Molecular biology sits at the intersection of biochemistry and genetics; as these scientific disciplines emerged and evolved in the 20th century, it became clear that they both sought to determine the molecular mechanisms which underlie vital cellular functions. Advances in molecular biology have been closely related to the development of new technologies and their optimization. Molecular biology has been elucidated by the work of many scientists, and thus the history of the field depends on an understanding of these scientists and their experiments. The field of genetics arose from attempts to understand the set of rules underlying reproduction and heredity, and the nature of the hypothetical units of heredity known as genes. Gregor Mendel pioneered this work in 1866, when he first described the laws of inheritance he observed in his studies of mating crosses in pea plants. One such law of genetic inheritance is the law of segregation, which states that diploid individuals with two alleles for a particular gene will pass one of these alleles to their offspring. Because of his critical work, the study of genetic inheritance is commonly referred to as Mendelian genetics. A major milestone in molecular biology was the discovery of the structure of DNA. 
This work began in 1869 with Friedrich Miescher, a Swiss biochemist who first proposed a structure called nuclein, which we now know to be deoxyribonucleic acid, or DNA. He discovered this unique substance by studying the components of pus-filled bandages, and noting the unique properties of the "phosphorus-containing substances". Another notable contributor to the DNA model was Phoebus Levene, who proposed the "polynucleotide model" of DNA in 1919 as a result of his biochemical experiments on yeast. In 1950, Erwin Chargaff expanded on the work of Levene and elucidated a few critical properties of nucleic acids: first, the sequence of nucleic acids varies across species. Second, the total concentration of purines (adenine and guanine) is always equal to the total concentration of pyrimidines (cytosine and thymine). This is now known as Chargaff's rule. In 1953, James Watson and Francis Crick published the double helical structure of DNA, based on the X-ray crystallography work done by Rosalind Franklin which was conveyed to them by Maurice Wilkins and Max Perutz. Watson and Crick described the structure of DNA and conjectured about the implications of this unique structure for possible mechanisms of DNA replication. Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, along with Wilkins, for proposing a model of the structure of DNA. In 1961, it was demonstrated that when a gene encodes a protein, three sequential bases of a gene's DNA specify each successive amino acid of the protein. Thus the genetic code is a triplet code, where each triplet (called a codon) specifies a particular amino acid. Furthermore, it was shown that the codons do not overlap with each other in the DNA sequence encoding a protein, and that each sequence is read from a fixed starting point. During 1962–1964, through the use of conditional lethal mutants of a bacterial virus, fundamental advances were made in our understanding of the functions and interactions of the proteins employed in the machinery of DNA replication, DNA repair, DNA recombination, and in the assembly of molecular structures. Griffith's experiment In 1928, Frederick Griffith encountered a virulence property in pneumococcus bacteria that was killing laboratory mice. According to Mendelian genetics, the prevailing view at that time, gene transfer could occur only from parent to daughter cells. Griffith advanced another theory, proposing that gene transfer could also occur between members of the same generation, a process now known as horizontal gene transfer (HGT). This particular phenomenon is now referred to as genetic transformation. Griffith's experiment used pneumococcus bacteria, which occur as two different strains, one virulent and smooth and one avirulent and rough. The smooth strain has a glistening appearance owing to the presence of a specific polysaccharide capsule, a polymer of glucose and glucuronic acid. Because of this polysaccharide layer, a host's immune system cannot recognize the bacterium, and it kills the host. The other, avirulent, rough strain lacks this polysaccharide capsule and has a dull, rough appearance. The presence or absence of the capsule is genetically determined. Smooth and rough strains occur as several different subtypes, such as S-I, S-II, S-III, etc. and R-I, R-II, R-III, etc., respectively. All of these S and R subtypes differ from one another in the antigen type they produce.
Avery–MacLeod–McCarty experiment The Avery–MacLeod–McCarty experiment was a landmark study conducted in 1944 that demonstrated that DNA, not protein as previously thought, carries genetic information in bacteria. Oswald Avery, Colin Munro MacLeod, and Maclyn McCarty used an extract from a strain of pneumococcus that could cause pneumonia in mice. They showed that genetic transformation in the bacteria could be accomplished by exposing harmless bacteria to purified DNA from the extract. They discovered that when they digested the DNA in the extract with DNase, transformation of harmless bacteria into virulent ones was lost. This provided strong evidence that DNA was the genetic material, challenging the prevailing belief that proteins were responsible. It laid the basis for the subsequent discovery of its structure by Watson and Crick. Hershey–Chase experiment Confirmation that DNA is the genetic material responsible for infection came from the Hershey–Chase experiment, which used E. coli and bacteriophage. This experiment is also known as the blender experiment, as a kitchen blender was used as a major piece of apparatus. Alfred Hershey and Martha Chase demonstrated that the DNA injected by a phage particle into a bacterium contains all the information required to synthesize progeny phage particles. They used radioactivity to tag the bacteriophage's protein coat with radioactive sulphur in one preparation and its DNA with radioactive phosphorus in another. After bacteriophage and E. coli were mixed in a test tube, an incubation period followed in which the phage injected its genetic material into the E. coli cells. The mixture was then blended or agitated, which separated the phage coats from the E. coli cells. The whole mixture was centrifuged; the pellet containing the E. coli cells was examined, and the supernatant was discarded. The E. coli cells showed radioactive phosphorus, which indicated that the transferred material was DNA, not the protein coat. The transferred phage DNA becomes associated with the DNA of E. coli, and radioactivity was found only with the bacteriophage's DNA. This DNA can be passed on to the next generation, which gave rise to the concept of transduction. Transduction is a process in which bacterial DNA carries a fragment of bacteriophage DNA and passes it on to the next generation. This is also a type of horizontal gene transfer. Meselson–Stahl experiment The Meselson–Stahl experiment was a landmark experiment in molecular biology that provided evidence for the semiconservative replication of DNA. Conducted in 1958 by Matthew Meselson and Franklin Stahl, the experiment involved growing E. coli bacteria in a medium containing the heavy isotope of nitrogen (15N) for several generations. This caused all newly synthesized bacterial DNA to incorporate the heavy isotope. After allowing the bacteria to replicate in a medium containing normal nitrogen (14N), samples were taken at various time points. These samples were then subjected to centrifugation in a density gradient, which separated the DNA molecules based on their density. The results showed that after one generation of replication in the 14N medium, the DNA formed a band of intermediate density between that of pure 15N DNA and pure 14N DNA.
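To make the expected band pattern concrete, here is a minimal sketch (not part of the original experiment) that tracks duplex density classes under strictly semiconservative replication; the strand labels and generation count are purely illustrative.

```python
from collections import Counter

def replicate_semiconservative(duplexes):
    """Each duplex separates; every old strand pairs with a new light (14N) strand."""
    daughters = []
    for strand_a, strand_b in duplexes:
        daughters.append((strand_a, "14N"))
        daughters.append((strand_b, "14N"))
    return daughters

def band(duplex):
    """Classify a duplex by how many of its strands carry the heavy isotope."""
    heavy = duplex.count("15N")
    return {2: "heavy", 1: "intermediate", 0: "light"}[heavy]

# Start with DNA fully labelled with the heavy isotope (grown on 15N medium).
population = [("15N", "15N")]
for generation in range(1, 4):
    population = replicate_semiconservative(population)
    counts = Counter(band(d) for d in population)
    total = len(population)
    print(generation, {k: f"{v / total:.0%}" for k, v in counts.items()})
# generation 1 -> 100% intermediate
# generation 2 -> 50% intermediate, 50% light
# generation 3 -> 25% intermediate, 75% light
```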
This supported the semiconservative DNA replication proposed by Watson and Crick, where each strand of the parental DNA molecule serves as a template for the synthesis of a new complementary strand, resulting in two daughter DNA molecules, each consisting of one parental and one newly synthesized strand. The Meselson-Stahl experiment provided compelling evidence for the semiconservative replication of DNA, which is fundamental to the understanding of genetics and molecular biology. Modern molecular biology In the early 2020s, molecular biology entered a golden age defined by both vertical and horizontal technical development. Vertically, novel technologies are allowing for real-time monitoring of biological processes at the atomic level. Molecular biologists today have access to increasingly affordable sequencing data at increasingly higher depths, facilitating the development of novel genetic manipulation methods in new non-model organisms. Likewise, synthetic molecular biologists will drive the industrial production of small and macro molecules through the introduction of exogenous metabolic pathways in various prokaryotic and eukaryotic cell lines. Horizontally, sequencing data is becoming more affordable and used in many different scientific fields. This will drive the development of industries in developing nations and increase accessibility to individual researchers. Likewise, CRISPR-Cas9 gene editing experiments can now be conceived and implemented by individuals for under $10,000 in novel organisms, which will drive the development of industrial and medical applications. Relationship to other biological sciences The following list describes a viewpoint on the interdisciplinary relationships between molecular biology and other related fields. Molecular biology is the study of the molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. Biochemistry is the study of the chemical substances and vital processes occurring in living organisms. Biochemists focus heavily on the role, function, and structure of biomolecules such as proteins, lipids, carbohydrates and nucleic acids. Genetics is the study of how genetic differences affect organisms. Genetics attempts to predict how mutations, individual genes and genetic interactions can affect the expression of a phenotype While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics. Techniques of molecular biology Molecular cloning Molecular cloning is used to isolate and then transfer a DNA sequence of interest into a plasmid vector. 
This recombinant DNA technology was first developed in the 1960s. In this technique, a DNA sequence coding for a protein of interest is cloned using polymerase chain reaction (PCR), and/or restriction enzymes, into a plasmid (expression vector). The plasmid vector usually has at least 3 distinctive features: an origin of replication, a multiple cloning site (MCS), and a selective marker (usually antibiotic resistance). Additionally, upstream of the MCS are the promoter regions and the transcription start site, which regulate the expression of cloned gene. This plasmid can be inserted into either bacterial or animal cells. Introducing DNA into bacterial cells can be done by transformation via uptake of naked DNA, conjugation via cell-cell contact or by transduction via viral vector. Introducing DNA into eukaryotic cells, such as animal cells, by physical or chemical means is called transfection. Several different transfection techniques are available, such as calcium phosphate transfection, electroporation, microinjection and liposome transfection. The plasmid may be integrated into the genome, resulting in a stable transfection, or may remain independent of the genome and expressed temporarily, called a transient transfection. DNA coding for a protein of interest is now inside a cell, and the protein can now be expressed. A variety of systems, such as inducible promoters and specific cell-signaling factors, are available to help express the protein of interest at high levels. Large quantities of a protein can then be extracted from the bacterial or eukaryotic cell. The protein can be tested for enzymatic activity under a variety of situations, the protein may be crystallized so its tertiary structure can be studied, or, in the pharmaceutical industry, the activity of new drugs against the protein can be studied. Polymerase chain reaction Polymerase chain reaction (PCR) is an extremely versatile technique for copying DNA. In brief, PCR allows a specific DNA sequence to be copied or modified in predetermined ways. The reaction is extremely powerful and under perfect conditions could amplify one DNA molecule to become 1.07 billion molecules in less than two hours. PCR has many applications, including the study of gene expression, the detection of pathogenic microorganisms, the detection of genetic mutations, and the introduction of mutations to DNA. The PCR technique can be used to introduce restriction enzyme sites to ends of DNA molecules, or to mutate particular bases of DNA, the latter is a method referred to as site-directed mutagenesis. PCR can also be used to determine whether a particular DNA fragment is found in a cDNA library. PCR has many variations, like reverse transcription PCR (RT-PCR) for amplification of RNA, and, more recently, quantitative PCR which allow for quantitative measurement of DNA or RNA molecules. Gel electrophoresis Gel electrophoresis is a technique which separates molecules by their size using an agarose or polyacrylamide gel. This technique is one of the principal tools of molecular biology. The basic principle is that DNA fragments can be separated by applying an electric current across the gel - because the DNA backbone contains negatively charged phosphate groups, the DNA will migrate through the agarose gel towards the positive end of the current. Proteins can also be separated on the basis of size using an SDS-PAGE gel, or on the basis of size and their electric charge by using what is known as a 2D gel electrophoresis. 
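As a sketch of how band positions on such a gel are turned into fragment sizes, the snippet below uses the commonly applied approximation that migration distance is roughly linear in log10(fragment length) over much of the gel; the ladder sizes and measured distances are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical size ladder: known marker lengths (bp) and measured migration (mm).
ladder_bp = np.array([500, 1000, 2000, 4000, 8000])
ladder_mm = np.array([42.0, 35.0, 28.0, 21.0, 14.0])

# Fit distance as a linear function of log10(size).
slope, intercept = np.polyfit(np.log10(ladder_bp), ladder_mm, 1)

def estimate_size(distance_mm):
    """Invert the fitted line to convert a band's migration distance into base pairs."""
    return 10 ** ((distance_mm - intercept) / slope)

# A band that ran between the 1 kb and 2 kb markers.
print(f"{estimate_size(31.5):.0f} bp")   # ~1.4 kb under these assumed values
```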
The Bradford protein assay The Bradford assay is a molecular biology technique which enables the fast, accurate quantitation of protein molecules utilizing the unique properties of a dye called Coomassie Brilliant Blue G-250. Coomassie Blue undergoes a visible color shift from reddish-brown to bright blue upon binding to protein. In its unstable, cationic state, Coomassie Blue has a background wavelength of 465 nm and gives off a reddish-brown color. When Coomassie Blue binds to protein in an acidic solution, the background wavelength shifts to 595 nm and the dye gives off a bright blue color. Proteins in the assay bind Coomassie blue in about 2 minutes, and the protein-dye complex is stable for about an hour, although it is recommended that absorbance readings are taken within 5 to 20 minutes of reaction initiation. The concentration of protein in the Bradford assay can then be measured using a visible light spectrophotometer, and therefore does not require extensive equipment. This method was developed in 1975 by Marion M. Bradford, and has enabled significantly faster, more accurate protein quantitation compared to previous methods: the Lowry procedure and the biuret assay. Unlike the previous methods, the Bradford assay is not susceptible to interference by several non-protein molecules, including ethanol, sodium chloride, and magnesium chloride. However, it is susceptible to influence by strong alkaline buffering agents, such as sodium dodecyl sulfate (SDS). Macromolecule blotting and probing The terms northern, western and eastern blotting are derived from what initially was a molecular biology joke that played on the term Southern blotting, after the technique described by Edwin Southern for the hybridisation of blotted DNA. Patricia Thomas, developer of the RNA blot which then became known as the northern blot, actually did not use the term. Southern blotting Named after its inventor, biologist Edwin Southern, the Southern blot is a method for probing for the presence of a specific DNA sequence within a DNA sample. DNA samples before or after restriction enzyme (restriction endonuclease) digestion are separated by gel electrophoresis and then transferred to a membrane by blotting via capillary action. The membrane is then exposed to a labeled DNA probe that has a complement base sequence to the sequence on the DNA of interest. Southern blotting is less commonly used in laboratory science due to the capacity of other techniques, such as PCR, to detect specific DNA sequences from DNA samples. These blots are still used for some applications, however, such as measuring transgene copy number in transgenic mice or in the engineering of gene knockout embryonic stem cell lines. Northern blotting The northern blot is used to study the presence of specific RNA molecules as relative comparison among a set of different samples of RNA. It is essentially a combination of denaturing RNA gel electrophoresis, and a blot. In this process RNA is separated based on size and is then transferred to a membrane that is then probed with a labeled complement of a sequence of interest. The results may be visualized through a variety of ways depending on the label used; however, most result in the revelation of bands representing the sizes of the RNA detected in sample. The intensity of these bands is related to the amount of the target RNA in the samples analyzed. 
The procedure is commonly used to study when and how much gene expression is occurring by measuring how much of that RNA is present in different samples, assuming that no post-transcriptional regulation occurs and that the levels of mRNA reflect proportional levels of the corresponding protein being produced. It is one of the most basic tools for determining at what time, and under what conditions, certain genes are expressed in living tissues. Western blotting A western blot is a technique by which specific proteins can be detected from a mixture of proteins. Western blots can be used to determine the size of isolated proteins, as well as to quantify their expression. In western blotting, proteins are first separated by size, in a thin gel sandwiched between two glass plates in a technique known as SDS-PAGE. The proteins in the gel are then transferred to a polyvinylidene fluoride (PVDF), nitrocellulose, nylon, or other support membrane. This membrane can then be probed with solutions of antibodies. Antibodies that specifically bind to the protein of interest can then be visualized by a variety of techniques, including colored products, chemiluminescence, or autoradiography. Often, the antibodies are labeled with enzymes. When a chemiluminescent substrate is exposed to the enzyme it allows detection. Using western blotting techniques allows not only detection but also quantitative analysis. Analogous methods to western blotting can be used to directly stain specific proteins in live cells or tissue sections. Eastern blotting The eastern blotting technique is used to detect post-translational modification of proteins. Proteins blotted on to the PVDF or nitrocellulose membrane are probed for modifications using specific substrates. Microarrays A DNA microarray is a collection of spots attached to a solid support such as a microscope slide where each spot contains one or more single-stranded DNA oligonucleotide fragments. Arrays make it possible to put down large quantities of very small (100 micrometre diameter) spots on a single slide. Each spot has a DNA fragment molecule that is complementary to a single DNA sequence. A variation of this technique allows the gene expression of an organism at a particular stage in development to be qualified (expression profiling). In this technique the RNA in a tissue is isolated and converted to labeled complementary DNA (cDNA). This cDNA is then hybridized to the fragments on the array and visualization of the hybridization can be done. Since multiple arrays can be made with exactly the same position of fragments, they are particularly useful for comparing the gene expression of two different tissues, such as a healthy and cancerous tissue. Also, one can measure what genes are expressed and how that expression changes with time or with other factors. There are many different ways to fabricate microarrays; the most common are silicon chips, microscope slides with spots of ~100 micrometre diameter, custom arrays, and arrays with larger spots on porous membranes (macroarrays). There can be anywhere from 100 spots to more than 10,000 on a given array. Arrays can also be made with molecules other than DNA. Allele-specific oligonucleotide Allele-specific oligonucleotide (ASO) is a technique that allows detection of single base mutations without the need for PCR or gel electrophoresis. 
Short (20–25 nucleotides in length), labeled probes are exposed to the non-fragmented target DNA; hybridization occurs with high specificity due to the short length of the probes, and even a single base change will hinder hybridization. The target DNA is then washed and the unhybridized probes are removed. The target DNA is then analyzed for the presence of the probe via radioactivity or fluorescence. In this experiment, as in most molecular biology techniques, a control must be used to ensure successful experimentation. In molecular biology, procedures and technologies are continually being developed and older technologies abandoned. For example, before the advent of DNA gel electrophoresis (agarose or polyacrylamide), the size of DNA molecules was typically determined by rate sedimentation in sucrose gradients, a slow and labor-intensive technique requiring expensive instrumentation; prior to sucrose gradients, viscometry was used. Aside from their historical interest, it is often worth knowing about older technology, as it is occasionally useful for solving a new problem for which the newer technique is inappropriate.
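The allele-discrimination logic behind the ASO probes described above can be sketched as a simple exact-match test; the probe and target sequences below are invented for illustration, and a real assay controls specificity through hybridization and washing conditions rather than string matching.

```python
# Toy illustration of allele-specific oligonucleotide (ASO) logic: under stringent
# conditions a short probe stays bound only when it matches the target perfectly,
# so one probe per allele can distinguish a single-base variant.
def hybridizes(probe: str, target: str) -> bool:
    """Return True if the probe finds a perfect-match site in the target sequence."""
    return probe in target

wildtype_probe = "ACGTGATCCGTAGCTAAGTC"   # 20 nt, hypothetical wild-type allele
mutant_probe   = "ACGTGATCCGTGGCTAAGTC"   # same probe with a single A->G change

# Hypothetical sample DNA carrying the mutant allele.
sample = "TTTT" + "ACGTGATCCGTGGCTAAGTC" + "GGGG"

for name, probe in [("wild-type", wildtype_probe), ("mutant", mutant_probe)]:
    print(name, "probe binds:", hybridizes(probe, sample))
# wild-type probe binds: False; mutant probe binds: True
```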
Morphogenesis
Morphogenesis (from the Greek morphê shape and genesis creation, literally "the generation of form") is the biological process that causes a cell, tissue or organism to develop its shape. It is one of three fundamental aspects of developmental biology along with the control of tissue growth and patterning of cellular differentiation. The process controls the organized spatial distribution of cells during the embryonic development of an organism. Morphogenesis can take place also in a mature organism, such as in the normal maintenance of tissue by stem cells or in regeneration of tissues after damage. Cancer is an example of highly abnormal and pathological tissue morphogenesis. Morphogenesis also describes the development of unicellular life forms that do not have an embryonic stage in their life cycle. Morphogenesis is essential for the evolution of new forms. Morphogenesis is a mechanical process involving forces that generate mechanical stress, strain, and movement of cells, and can be induced by genetic programs according to the spatial patterning of cells within tissues. Abnormal morphogenesis is called dysmorphogenesis. History Some of the earliest ideas and mathematical descriptions on how physical processes and constraints affect biological growth, and hence natural patterns such as the spirals of phyllotaxis, were written by D'Arcy Wentworth Thompson in his 1917 book On Growth and Form and Alan Turing in his The Chemical Basis of Morphogenesis (1952). Where Thompson explained animal body shapes as being created by varying rates of growth in different directions, for instance to create the spiral shell of a snail, Turing correctly predicted a mechanism of morphogenesis, the diffusion of two different chemical signals, one activating and one deactivating growth, to set up patterns of development, decades before the formation of such patterns was observed. The fuller understanding of the mechanisms involved in actual organisms required the discovery of the structure of DNA in 1953, and the development of molecular biology and biochemistry. Genetic and molecular basis Several types of molecules are important in morphogenesis. Morphogens are soluble molecules that can diffuse and carry signals that control cell differentiation via concentration gradients. Morphogens typically act through binding to specific protein receptors. An important class of molecules involved in morphogenesis are transcription factor proteins that determine the fate of cells by interacting with DNA. These can be coded for by master regulatory genes, and either activate or deactivate the transcription of other genes; in turn, these secondary gene products can regulate the expression of still other genes in a regulatory cascade of gene regulatory networks. At the end of this cascade are classes of molecules that control cellular behaviors such as cell migration, or, more generally, their properties, such as cell adhesion or cell contractility. For example, during gastrulation, clumps of stem cells switch off their cell-to-cell adhesion, become migratory, and take up new positions within an embryo where they again activate specific cell adhesion proteins and form new tissues and organs. Developmental signaling pathways implicated in morphogenesis include Wnt, Hedgehog, and ephrins. Cellular basis At a tissue level, ignoring the means of control, morphogenesis arises because of cellular proliferation and motility. Morphogenesis also involves changes in the cellular structure or how cells interact in tissues. 
These changes can result in tissue elongation, thinning, folding, invasion or separation of one tissue into distinct layers. The latter case is often referred as cell sorting. Cell "sorting out" consists of cells moving so as to sort into clusters that maximize contact between cells of the same type. The ability of cells to do this has been proposed to arise from differential cell adhesion by Malcolm Steinberg through his differential adhesion hypothesis. Tissue separation can also occur via more dramatic cellular differentiation events during which epithelial cells become mesenchymal (see Epithelial–mesenchymal transition). Mesenchymal cells typically leave the epithelial tissue as a consequence of changes in cell adhesive and contractile properties. Following epithelial-mesenchymal transition, cells can migrate away from an epithelium and then associate with other similar cells in a new location. In plants, cellular morphogenesis is tightly linked to the chemical composition and the mechanical properties of the cell wall. Cell-to-cell adhesion During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture cells that have the strongest adhesion move to the center of a mixed aggregates of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin. Extracellular matrix The extracellular matrix (ECM) is involved in keeping tissues separated, providing structural support or providing a structure for cells to migrate on. Collagen, laminin, and fibronectin are major ECM molecules that are secreted and assembled into sheets, fibers, and gels. Multisubunit transmembrane receptors called integrins are used to bind to the ECM. Integrins bind extracellularly to fibronectin, laminin, or other ECM components, and intracellularly to microfilament-binding proteins α-actinin and talin to link the cytoskeleton with the outside. Integrins also serve as receptors to trigger signal transduction cascades when binding to the ECM. A well-studied example of morphogenesis that involves ECM is mammary gland ductal branching. Cell contractility Tissues can change their shape and separate into distinct layers via cell contractility. Just as in muscle cells, myosin can contract different parts of the cytoplasm to change its shape or structure. Myosin-driven contractility in embryonic tissue morphogenesis is seen during the separation of germ layers in the model organisms Caenorhabditis elegans, Drosophila and zebrafish. 
There are often periodic pulses of contraction in embryonic morphogenesis. A model called the cell state splitter involves alternating cell contraction and expansion, initiated by a bistable organelle at the apical end of each cell. The organelle consists of microtubules and microfilaments in mechanical opposition. It responds to local mechanical perturbations caused by morphogenetic movements. These then trigger traveling embryonic differentiation waves of contraction or expansion over presumptive tissues that determine cell type and is followed by cell differentiation. The cell state splitter was first proposed to explain neural plate morphogenesis during gastrulation of the axolotl and the model was later generalized to all of morphogenesis. Branching morphogenesis In the development of the lung a bronchus branches into bronchioles forming the respiratory tree. The branching is a result of the tip of each bronchiolar tube bifurcating, and the process of branching morphogenesis forms the bronchi, bronchioles, and ultimately the alveoli. Branching morphogenesis is also evident in the ductal formation of the mammary gland. Primitive duct formation begins in development, but the branching formation of the duct system begins later in response to estrogen during puberty and is further refined in line with mammary gland development. Cancer morphogenesis Cancer can result from disruption of normal morphogenesis, including both tumor formation and tumor metastasis. Mitochondrial dysfunction can result in increased cancer risk due to disturbed morphogen signaling. Virus morphogenesis During assembly of the bacteriophage (phage) T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence. Phage T4 morphogenesis is divided into three independent pathways: the head, the tail and the long tail fibres as detailed by Yap and Rossman. Computer models An approach to model morphogenesis in computer science or mathematics can be traced to Alan Turing's 1952 paper, "The chemical basis of morphogenesis", a model now known as the Turing pattern. Another famous model is the so-called French flag model, developed in the sixties. Improvements in computer performance in the twenty-first century enabled the simulation of relatively complex morphogenesis models. In 2020, such a model was proposed where cell growth and differentiation is that of a cellular automaton with parametrized rules. As the rules' parameters are differentiable, they can be trained with gradient descent, a technique which has been highly optimized in recent years due to its use in machine learning. This model was limited to the generation of pictures, and is thus bi-dimensional. A similar model to the one described above was subsequently extended to generate three-dimensional structures, and was demonstrated in the video game Minecraft, whose block-based nature made it particularly expedient for the simulation of 3D cellular automatons. 
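To illustrate the kind of reaction–diffusion computation referred to above, here is a minimal one-dimensional sketch of a Gray–Scott-type system integrated with explicit Euler steps. The grid size, parameters, and seed are illustrative assumptions; it is a toy demonstration of how local reactions plus diffusion can generate spatial structure, not a model of any particular tissue.

```python
import numpy as np

# Minimal 1-D Gray-Scott reaction-diffusion sketch (illustrative parameters).
n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060

u = np.ones(n)           # "substrate" concentration
v = np.zeros(n)          # "activator" concentration
u[n // 2 - 5: n // 2 + 5] = 0.5   # a local perturbation seeds the pattern
v[n // 2 - 5: n // 2 + 5] = 0.5

def laplacian(a):
    """Discrete Laplacian with periodic boundary conditions."""
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# For parameter choices in the pattern-forming regime, v develops localized
# peaks ("spots"); try varying F and k to explore other behaviors.
print(np.round(v[::10], 2))
```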
Synthetic biology
Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms, and it applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature. It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biochemistry, biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology. It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes. Additionally, it focuses on engineering new abilities into existing organisms in order to redesign them for useful purposes. In order to produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome.
History
1910: First identifiable use of the term "synthetic biology", in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He also used the term in another publication, La Biologie Synthétique, in 1912.
1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built.
1953: Francis Crick and James Watson publish the structure of DNA in Nature.
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envision the ability to assemble new systems from molecular components.
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al., constituting the dawn of synthetic biology.
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, prompting Szybalski to offer an editorial comment in the journal Gene.
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, made by combining genes within E. coli cells.
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT the following year.
2003: Researchers engineer an artemisinin precursor pathway in E. coli.
2004: The first international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0), is held at MIT.
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically synthesized DNA using yeast recombination.
2011: Functional synthetic chromosome arms are engineered in yeast.
2012: The Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeted DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.
2019: Scientists at ETH Zurich report the creation of the first bacterial genome made entirely by a computer, named Caulobacter ethensis-2.0, although a related viable form of C. ethensis-2.0 does not yet exist.
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids.
2020: Scientists create the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI.
2021: Scientists report that xenobots are able to self-replicate by gathering loose cells in the environment and then forming new xenobots.
Perspectives
Synthetic biology is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings. Engineers view biology as a technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems and our environment. Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine. Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is growing rapidly. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; together these companies had an estimated net worth of $3.9 billion in the global market.
Synthetic biology currently has no generally accepted definition. Here are a few examples:
It is the science of emerging genetic and physical engineering to produce new (and, therefore, synthetic) life forms.
To develop organisms with novel or enhanced characteristics, this emerging field of study combines biology, engineering, and related disciplines' knowledge and techniques to design chemically synthesised DNA.
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells.
Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium.
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components.
Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level. Optimizing these exogenous pathways in unnatural systems takes iterative fine-tuning of the individual biomolecular components to achieve the highest concentrations of the desired product.
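The iterative fine-tuning described above is often explored computationally before strains are built in the laboratory. The sketch below is a minimal, purely illustrative Python example: it assumes a hypothetical two-enzyme pathway whose product titer is predicted by a toy burden-penalized model, and it brute-forces combinations of promoter and ribosome-binding-site (RBS) strengths to find the best-scoring design. The part names, strength values and titer model are all invented for illustration; real design tools rely on measured part characterizations and much richer models.

```python
from itertools import product

# Hypothetical relative strengths for promoter and RBS parts (arbitrary units).
PROMOTERS = {"pWeak": 0.2, "pMed": 1.0, "pStrong": 5.0}
RBS_SITES = {"rbsA": 0.5, "rbsB": 1.0, "rbsC": 3.0}

def predicted_titer(promoter_strength: float, rbs_strength: float) -> float:
    """Toy model: titer rises with expression but collapses when the
    metabolic burden of over-expression becomes too large."""
    expression = promoter_strength * rbs_strength
    burden_penalty = 1.0 / (1.0 + (expression / 6.0) ** 2)
    return expression * burden_penalty

# Exhaustively score every promoter/RBS combination and keep the best one.
best = max(
    product(PROMOTERS.items(), RBS_SITES.items()),
    key=lambda combo: predicted_titer(combo[0][1], combo[1][1]),
)
(p_name, p_val), (r_name, r_val) = best
print(f"Best design: {p_name} + {r_name}, "
      f"predicted titer {predicted_titer(p_val, r_val):.2f} (arbitrary units)")
```

Even this toy model shows why the tuning is iterative rather than "always pick the strongest parts": in the example, the strongest promoter paired with the strongest RBS scores worse than a more moderate combination because of the burden penalty.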
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up; to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software. Categories Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology. It is necessary to review the distinctions and analogies between the categories of synthetic biology for its social and ethical assessment, to distinguish between issues affecting the whole field and particular to a specific one. Bioengineering The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and is currently the one that likely draws the attention of most researchers and funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems. A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. The creation of whole new signalling pathways, containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells), is known as bioengineering as part of synthetic biology. By utilising simplified and abstracted metabolic and regulatory modules as well as other standardized parts that may be freely combined to create new pathways or creatures, bioengineering aims to create innovative biological systems. In addition to creating infinite opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology. Synthetic genomics The formation of animals with a chemically manufactured (minimal) genome is another facet of synthetic biology that is highlighted by synthetic genomics. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions. Scientists have previously demonstrated the potential of this approach by creating infectious viruses by synthesising the genomes of multiple viruses. These significant advances in science and technology triggered the initial public concerns concerning the risks associated with this technology. A simple genome might also work as a "chassis genome" that could be enlarged quickly by gene inclusion created for particular tasks. 
Such "chassis creatures" would be more suited for the insertion of new functions than wild organisms since they would have fewer biological pathways that could potentially conflict with the new functionalities in addition to having specific insertion sites. Synthetic genomics strives to create creatures with novel "architectures," much like the bioengineering method. It adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on necessary genes and other required DNA sequences rather than the design of metabolic or regulatory pathways based on abstract criteria. Protocell synthetic biology The in vitro generation of synthetic cells is the protocell branch of synthetic biology. Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. In the end, these synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. The protocell technique has this as its end aim, however there are other intermediary steps that fall short of meeting all the criteria for a living cell. In order to carry out a specific function, these lipid vesicles contain cell extracts or more specific sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesise a particular protein. Protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but also every component of the cell in vitro, as opposed to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by the introduced synthetic genome. Synthetic biologists in this field view their work as basic study into the conditions necessary for life to exist and its origin more than in any of the other techniques. The protocell technique, however, also lends itself well to applications; similar to other synthetic biology byproducts, protocells could be employed for the manufacture of biopolymers and medicines. Unconventional molecular biology The objective of the "unnatural molecular biology" strategy is to create new varieties of life that are based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code. The creation of new types of nucleotides that can be built into unique nucleic acids could be accomplished by changing certain DNA or RNA constituents, such as the bases or the backbone sugars. The normal genetic code is being altered by inserting quadruplet codons or changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features in protein production. It is a scientific and technological problem to adjust the enzymatic machinery of the cell for both approaches. A new sort of life would be formed by organisms with a genome built on synthetic nucleic acids or on a totally new coding system for synthetic amino acids. This new style of life would have some benefits but also some new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. 
Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped. On the other hand, if these organisms ultimately were able to survive outside of controlled spaces, they might have a particular advantage over natural organisms because they would be resistant to predatory living organisms and natural viruses, which could lead to an unmanaged spread of the synthetic organisms.
In silico technique
In silico synthetic biology and the various strategies described above are interconnected. The development of complex designs, whether they are metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology approaches outlined above. Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms. The practical application of simulations and models through bioengineering or other fields of synthetic biology is the long-term goal of in silico synthetic biology. Many of the computational simulations of synthetic organisms produced to date have little to no direct analogy to living things. Because of this, in silico synthetic biology is regarded as a separate category in this article.
It is sensible to integrate the five areas under the umbrella of synthetic biology as a unified area of study. Even though they focus on various facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, the varied strategies start from different methodological approaches, which accounts for the diversity of synthetic biology approaches. Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all share the underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments. Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances synthetic biology as a whole must be taken into consideration.
Four engineering approaches
Synthetic biology has traditionally been divided into four different engineering approaches: top down, parallel, orthogonal and bottom up. One approach uses unnatural chemicals to replicate emergent behaviours from natural biology, with the goal of building artificial life. The other looks for interchangeable components from biological systems that can be assembled into systems that do not function naturally. In either case, a synthetic objective compels researchers to venture into new territory in order to engage and resolve issues that cannot be readily resolved by analysis. Because of this, new paradigms arise in ways that analysis alone cannot easily drive. In addition to devices that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that enhance the treatment of patients with infectious diseases.
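The kind of computational model this in silico branch works with can be illustrated with a small simulation. The sketch below is a minimal example in plain Python, not any published tool: it integrates the classic two-repressor toggle-switch equations (mutual repression described by Hill functions) with forward Euler steps, and the parameter values are arbitrary choices made for illustration.

```python
def simulate_toggle_switch(u0=0.1, v0=2.0, alpha=3.0, n=2.0,
                           dt=0.01, steps=50_000):
    """Integrate du/dt = alpha/(1 + v^n) - u and dv/dt = alpha/(1 + u^n) - v
    with forward Euler; u and v are the two mutually repressing proteins."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v ** n) - u
        dv = alpha / (1.0 + u ** n) - v
        u, v = u + dt * du, v + dt * dv
    return u, v

# Starting with v dominant, the circuit settles into the "v high / u low" state;
# starting with u dominant flips it to the opposite stable state.
print(simulate_toggle_switch(u0=0.1, v0=2.0))   # -> u low, v high
print(simulate_toggle_switch(u0=2.0, v0=0.1))   # -> u high, v low
```

Depending on which repressor starts out dominant, the simulation settles into one of two stable expression states, which is the bistable "memory" behaviour that made the 2000 genetic toggle switch mentioned in the History section a landmark circuit.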
Top-down approach
It involves using metabolic and genetic engineering techniques to impart new functions to living cells. By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single genesis for cellular life, the so-called Last Universal Common Ancestor, which supports the presence of a universal minimal genome that gave rise to all living things. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell. As a result, the Holy Grail-like pursuit of the "minimum genome" has grown elusive; moreover, cutting out a number of non-essential functions impairs an organism's fitness and leads to "fragile" genomes.
Bottom-up approach
This approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell. Reproduction, replication, and assembly are three crucial self-organizational principles that are taken into account in order to accomplish this. Cells, which are made up of a container and a metabolism, are considered "hardware" in the definition of reproduction, whereas replication occurs when a system duplicates a perfect copy of itself, as in the case of DNA, which is considered "software". Assembly occurs when vesicles or containers aggregate; examples include Oparin's coacervates, tiny droplets of organic molecules such as lipids, and liposomes, membrane-like structures composed of phospholipids. The study of protocells exists alongside other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins", as well as to mimic physiological functions including cell division and growth. Although it is no longer classified as synthetic biology research, the in vitro enhancement of synthetic pathways does have the potential to influence other synthetic biology sectors, including metabolic engineering. This research, which is primarily basic, deserves proper recognition as synthetic biology research.
Parallel approach
Parallel engineering is also known as bioengineering. The basic genetic code is the foundation for parallel engineering research, which uses conventional biomolecules like nucleic acids and the 20 amino acids to construct biological systems. For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA components and the engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. For directing the expression of two or more genes and/or proteins, the majority of these applications rely on the use of one or more vectors (or plasmids). Plasmids are small, circular, double-stranded DNA units that are primarily found in prokaryotic cells, can occasionally be detected in eukaryotic cells, and may replicate autonomously of chromosomal DNA.
Orthogonal approach
It is also known as perpendicular engineering. This strategy, also referred to as "chemical synthetic biology", principally seeks to alter or enlarge the genetic codes of living systems utilising artificial DNA bases and/or amino acids.
This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. In recent decades, researchers have created compounds that are structurally similar to the canonical DNA bases to see whether these "alien" or xeno nucleic acid (XNA) molecules can be employed as genetic information carriers. Similarly, noncanonical moieties have taken the place of the DNA sugar (deoxyribose). The genetic code can also be altered or enlarged in order to express information beyond the 20 conventional amino acids of proteins. One method involves incorporating a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places, using orthogonal enzymes and a transfer RNA adaptor from another organism. Orthogonal enzymes are produced by "directed evolution", which entails repeated cycles of gene mutagenesis (production of genotypic diversity), screening or selection (of a specific phenotypic trait), and amplification of a better variant for the following iterative round. Numerous XAAs have been effectively incorporated into proteins in more complex creatures like worms and flies, as well as in bacteria, yeast, and human cell lines. As a result of canonical DNA sequence changes, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins or to create "mirror life", that is, biological systems containing biomolecules made up of enantiomers with different chiral orientations.
Enabling technologies
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Our ability to comprehend and design biological systems has changed significantly as a result of developments over the previous few decades in both reading (sequencing) and writing (synthesizing) DNA sequences. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling an ever-increasing degree of control over biological systems and even entire organisms. Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
DNA and gene synthesis
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60- to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome; the project took two years. In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and were working on getting it functioning in a living cell.
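Assembling long constructs from short chemically synthesized oligos, as in the viral genome examples above, begins with a design step that tiles the target sequence into overlapping fragments. The following Python sketch is a simplified illustration rather than the protocol used in any of those studies: it cuts a target into sense-strand oligos of roughly 60 nt sharing fixed 20 nt overlaps, the sort of layout used in overlap-based assembly, and it ignores real-world constraints such as melting-temperature balancing and repeat avoidance. The example target sequence is randomly generated.

```python
def reverse_complement(seq: str) -> str:
    """Reverse complement of an uppercase A/C/G/T sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def tile_into_oligos(target: str, oligo_len: int = 60, overlap: int = 20):
    """Split a target sequence into overlapping sense-strand oligos.
    Consecutive oligos share `overlap` bases so that they can anneal or
    prime on each other during overlap-based assembly."""
    step = oligo_len - overlap
    oligos = []
    for start in range(0, max(len(target) - overlap, 1), step):
        oligos.append(target[start:start + oligo_len])
    return oligos

if __name__ == "__main__":
    import random
    random.seed(0)
    # A made-up 300 bp target, purely for demonstration.
    target = "".join(random.choice("ACGT") for _ in range(300))
    oligos = tile_into_oligos(target)
    print(f"{len(oligos)} oligos, lengths {[len(o) for o in oligos]}")
    # In a two-strand design, every second oligo would be ordered as the
    # reverse complement so the pool anneals into a double-stranded ladder.
    print(reverse_complement(oligos[1])[:20], "...")
```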
In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip combined with PCR and DNA mismatch error-correction allows inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino-acids (see George M. Church's and Anthony Forster's synthetic cell projects.). This favors a synthesis-from-scratch approach. Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking. Sequencing DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms. Modularity This is the ability of a system or component to operate without reference to its context. The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. Biobricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using restriction enzymes EcoRI or XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix). Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation. To increase genome modularity, the practice of genome refactoring or improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function" has been adopted across synthetic biology disciplines. Some notable examples of refactoring including the nitrogen fixation cluster and type III secretion system along with bacteriophages T7 and ΦX174. While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and to link different proteins together. The interaction strength between protein partners should be tunable between a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilient to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding or SpyTag/SpyCatcher offer such control. 
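The "tunable interaction strength" mentioned above can be made concrete with a little arithmetic: in a simple one-step binding model, the mean lifetime of a bound complex is 1/koff, and equilibrium occupancy follows from the dissociation constant Kd. The numbers in the sketch below are generic, illustrative values, not measurements of any particular interaction domain.

```python
def complex_lifetime_s(k_off_per_s: float) -> float:
    """Mean lifetime of a bound complex for a one-step unbinding model."""
    return 1.0 / k_off_per_s

def fraction_bound(ligand_conc_M: float, kd_M: float) -> float:
    """Equilibrium occupancy for simple 1:1 binding."""
    return ligand_conc_M / (ligand_conc_M + kd_M)

# Illustrative numbers: a transient signalling-type interaction versus a
# near-irreversible structural linkage.
for name, k_off in [("dynamic signalling pair", 1e-1),      # ~10 s lifetime
                    ("very stable structural pair", 1e-6)]:  # ~11.6 days
    print(f"{name}: lifetime = {complex_lifetime_s(k_off):,.0f} s")

print(f"occupancy at 1 uM ligand with Kd = 100 nM: "
      f"{fraction_bound(1e-6, 1e-7):.0%}")
```

The six-orders-of-magnitude difference in off-rate in this toy calculation is exactly the design range the text describes, from seconds-long dynamic signalling events to effectively irreversible structural connections.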
In addition, it is necessary to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules through chemically induced dimerization. In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module being modelled. In the case of ultrasensitive modules, the sensitivity contribution of a module can differ from the sensitivity that the module sustains in isolation.
Modeling
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell, and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in the transcription, translation, regulation and induction of gene regulatory networks. Because of the numerous species involved and the intricacy of their relationships, only extensive modelling can enable the exploration of dynamic gene expression in a form suitable for research and design. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs. This contrasts with modelling artificial networks a posteriori.
Microfluidics
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyze and characterize them. It is widely employed in screening assays.
Synthetic transcription factors
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in the areas of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs that act through the eukaryotic transcription machinery.
Applications
Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology include:
Utilizing microorganisms for bioremediation to remove contaminants from our water, soil, and air.
Production of complex natural products that are usually extracted from plants but cannot be obtained in sufficient amounts, e.g. drugs of natural origin such as artemisinin and paclitaxel.
Beta-carotene, a substance typically associated with carrots that prevents vitamin A deficiency, is produced by rice that has been modified.
Every year, between 250,000 and 500,000 children lose their vision due to vitamin A deficiency, which also significantly raises their chance of dying from infectious infections. As a sustainable and environmentally benign alternative to the fresh roses that perfumers use to create expensive smells, yeast has been created to produce rose oil. Biosensors A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the Lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence, and can be placed after a respondent promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor created, consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants. When the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E.coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP). Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used. Biosensors could also be used to detect pathogenic signatures—such as of SARS-CoV-2—and can be wearable. For the purpose of detecting and reacting to various and temporary environmental factors, cells have developed a wide range of regulatory circuits, ranging from transcriptional to post-translational. These circuits are made up of transducer modules that filter the signals and activate a biological response, as well as carefully designed sensitive sections that attach analytes and regulate signal-detection thresholds. Modularity and selectivity are programmed to biosensor circuits at the transcriptional, translational, and post-translational levels, to achieve the delicate balancing of the two basic sensing modules. Food and drink However, not all synthetic nutrition products are animal food products – for instance, as of 2021, there are also products of synthetic coffee that are reported to be close to commercialization. Similar fields of research and production based on synthetic biology that can be used for the production of food and drink are: Genetically engineered microbial food cultures (e.g. for solar-energy-based protein powder) Cell-free artificial synthesis (e.g. synthetic starch; ) Materials Photosynthetic microbial cells have been used as a step to synthetic production of spider silk. Biological computers A biological computer refers to an engineered biological system that can perform computer-like operations, which is a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms, and demonstrated both analog and digital computation in living cells. They demonstrated that bacteria can be engineered to perform both analog and/or digital computation. In 2007, in human cells, research demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. 
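The digital behaviour described here is typically built from transcriptional gates whose steady-state responses are well approximated by Hill functions. The sketch below is a generic illustration rather than any published circuit or design tool: it composes a repressor-based response into a transcriptional NOR gate, a building block commonly used for layered genetic logic, with arbitrary parameter values.

```python
def repressor_output(input_level: float, y_max=10.0, y_min=0.1,
                     k=1.0, n=2.0) -> float:
    """Steady-state output of a NOT gate: a promoter repressed by its input,
    modelled with a repressing Hill function (arbitrary units)."""
    return y_min + (y_max - y_min) / (1.0 + (input_level / k) ** n)

def nor_gate(a: float, b: float) -> float:
    """Transcriptional NOR gate: both inputs drive the same repressor pool,
    which shuts off the output promoter, so output is high only if both
    inputs are low."""
    return repressor_output(a + b)

LOW, HIGH = 0.05, 5.0
for a in (LOW, HIGH):
    for b in (LOW, HIGH):
        out = nor_gate(a, b)
        print(f"A={'1' if a == HIGH else '0'} B={'1' if b == HIGH else '0'}"
              f" -> output {out:5.2f} ({'1' if out > 1.0 else '0'})")
```

Because NOR is functionally complete, layering gates of this kind is enough in principle to build any Boolean function, which is the idea behind the automated circuit-design work described next.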
In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems, opening the way for machine learning in these systems.
Cell transformation
Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA and proteins. Synthetic biologists have designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels. Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug artemisinin. Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several methods allow the construction of synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs. By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties are genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of the extracellular material of biofilms, as a platform for programmable nanomaterials. These nanofibers were genetically engineered for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.
Designed proteins
Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with similar properties as hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities, while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but were insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer chain alcohols from sugar. Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms.
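The bookkeeping behind "61 sense codons for 20 amino acids", and the codon-reassignment strategy used to expand the amino acid repertoire, can be made explicit in a few lines of Python. The sketch below uses the standard codon table (packed in the usual NCBI base ordering); the reassigned amber codon and the placeholder symbol "X" for a noncanonical amino acid are purely illustrative.

```python
# Standard genetic code packed as one string; bases ordered T, C, A, G at
# each codon position (the NCBI translation-table convention). '*' = stop.
AA_STRING = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"

CODON_TABLE = {
    b1 + b2 + b3: AA_STRING[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

sense_codons = {c: aa for c, aa in CODON_TABLE.items() if aa != "*"}
print(len(sense_codons), "sense codons encode",
      len(set(sense_codons.values())), "amino acids")   # 61 and 20

# Expanded-code sketch: reassign the amber stop codon (TAG) to a
# noncanonical amino acid, written here as "X" (purely illustrative).
expanded = dict(CODON_TABLE)
expanded["TAG"] = "X"

def translate(dna: str, table) -> str:
    """Translate an in-frame DNA string codon by codon."""
    return "".join(table[dna[i:i + 3]] for i in range(0, len(dna) - 2, 3))

orf = "ATGTTTTAGGGA"
print(translate(orf, CODON_TABLE))  # 'MF*G' - amber read as a stop
print(translate(orf, expanded))     # 'MFXG' - amber read as the new amino acid
```

In a cell, the reassignment in the last step corresponds to supplying an orthogonal aminoacyl-tRNA synthetase and a suppressor tRNA that decodes the amber codon, as described in the following passage.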
Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl tyrosine; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required. Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only nine amino acids were used. Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvements of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentive chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production". Designed nucleic acid systems Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enables the design of new genetic systems. Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides. Space exploration Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of occupied outposts with less dependence on Earth. Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops. Synthetic life One important topic in synthetic biology is synthetic life, that is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. 
Synthetic life experiments attempt to either probe the origins of life, study some of the properties of life, or, more ambitiously, recreate life from non-living (abiotic) components. Synthetic-life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools. A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Nobody has yet been able to create such a cell. A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter, and his team introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome. The first living organism with an 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome containing an expanded genetic code. The nucleosides added were d5SICS and dNaM. In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids. In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as the SynCellEU initiative. In 2023, researchers were able to create the first synthetically made human embryos derived from stem cells.
Drug delivery platforms
In therapeutics, synthetic biology has achieved significant advances in altering and simplifying the scope of therapeutics in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and transport of small molecules, are made possible by the rational, model-guided design and construction of biological components. Synthetic biology devices have been designed to act as therapies. Completely engineered viruses and organisms can be directed to target particular pathogens and diseased pathways. Thus, in two independent studies, researchers utilised genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving them genetic features that specifically target and hinder bacterial defences against antibiotic activity. In the therapy of cancer, since conventional medicines frequently target tumours and normal tissues indiscriminately, artificially created viruses and organisms that can identify pathological signals and couple their therapeutic action to them may be helpful. For example, p53 pathway activity in human cells has been coupled to adenovirus replication to control how the viruses replicate.
Engineered bacteria-based platform
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size.
Recently synthetic biologists reprogrammed bacteria to sense and respond to a particular cancer state. Most often bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that can specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. The other way is to allow bacteria to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into bacteria. Then the bacteria only release target therapeutic molecules to the tumor through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems can be used and other strategies as well. The system is inducible by external signals. Inducers include chemicals, electromagnetic or light waves. Multiple species and strains are applied in these therapeutics. Most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species have their own property and are unique to cancer therapy in terms of tissue colonization, interaction with immune system and ease of application. Engineered yeast-based platform Synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When orally delivered, these live yeast act like micro-factories and will make therapeutic molecules directly in the gastrointestinal tract. Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease. A live S. boulardii yeast delivering a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile has been developed. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models. The first in human clinical trial of engineered live yeast for the treatment of Clostridium difficile infection is anticipated in 2024 and will be sponsored by the developer Fzata, Inc. Cell-based platform The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells. T cell receptors were engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Multiple second generation CAR-based therapies have been approved by FDA. Gene switches were designed to enhance safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Mechanisms can more finely control the system and stop and reactivate it. Since the number of T-cells are important for therapy persistence and severity, growth of T-cells is also controlled to dial the effectiveness and safety of therapeutics. 
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
Biofuels, pharmaceuticals and biomaterials
The most popular biofuel is ethanol produced from corn or sugar cane, but this method of producing biofuels is troublesome and constrained due to the high agricultural cost and the inadequate fuel characteristics of ethanol. A substitute and potential source of renewable energy is microbes whose metabolic pathways have been altered to make them more efficient at converting biomass into biofuels. These techniques can only be expected to be successful if their production costs can be made to match, or fall below, those of present fuel production. Relatedly, there are several medicines whose expensive manufacturing procedures prevent them from having a larger therapeutic reach. The creation of new materials and the microbiological manufacturing of biomaterials would both benefit substantially from novel synthetic biology tools.
CRISPR/Cas9
The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method of genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) recruits the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed recombination and non-homologous end joining, can be used to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead Cas9, or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been utilised to control gene expression in bacteria or, when linked to an activation or repression domain, in yeast.
Regulatory elements
To build and develop biological systems, regulatory components including promoters, ribosome-binding sites (RBSs), and terminators are crucial. Despite years of study, the variety and number of available promoters and terminators remain quite scarce, not only for Escherichia coli but also for the well-researched model organism Saccharomyces cerevisiae, as well as for other organisms of interest. To overcome this constraint, numerous techniques have been developed for the discovery and characterization of promoters and terminators, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design.
Organoids
Synthetic biology has been used for organoids, which are lab-grown organs with applications in medical research and transplantation.
Bioprinted organs
Other transplants and induced regeneration
There is ongoing research and development into synthetic biology based methods for inducing regeneration in humans, as well as the creation of transplantable artificial organs.
Nanoparticles, artificial cells and micro-droplets
Synthetic biology can be used for creating nanoparticles which can be used for drug delivery as well as for other purposes. Complementary research and development seeks to create, and has created, synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks.
Synthetic micro-droplets for algal cells or synergistic algal-bacterial multicellular spheroid microbial reactors, for example, could be used to produce hydrogen as hydrogen economy biotechnology. Electrogenetics Mammalian designer cells are engineered by humans to behave a specific way, such as an immune cell that expresses a synthetic receptor designed to combat a specific disease. Electrogenetics is an application of synthetic biology that involves utilizing electrical fields to stimulate a response in engineered cells. Controlling the designer cells can be done with relative ease through the use of common electronic devices, such as smartphones. Additionally, electrogenetics allows for the possibility of creating devices that are much smaller and compact than devices that use other stimulus through the use of microscopic electrodes. One example of how electrogenetics is used to benefit public health is through stimulating designer cells that are able to produce/deliver therapeutics. This was implemented in ElectroHEK cells, cells that contain voltage-gated calcium channels that are electrosensitive, meaning that the ion channel can be controlled by electrical conduction between electrodes and the ElectroHEK cells. The expression levels of the artificial gene that these ElectroHEK cells contained was shown to be able to be controlled by changing the voltage or electrical pulse length. Further studies have expanded on this robust system, one of which is a beta cell line system designed to control the release of insulin based on electric signals. Ethics The creation of new life and the tampering of existing life has raised ethical concerns in the field of synthetic biology and are actively being discussed. Common ethical questions include: Is it morally right to tamper with nature? Is one playing God when creating new life? What happens if a synthetic organism accidentally escapes? What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)? Who will have control of and access to the products of synthetic biology? Who will gain from these innovations? Investors? Medical patients? Industrial farmers? Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans? What if a new creation is deserving of moral or legal status? The ethical aspects of synthetic biology has three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity. Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies and extensive regulations of genetic engineering and pathogen research were in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards.". 
The "creation" of life One ethical question is whether or not it is acceptable to create new life forms, sometimes known as "playing God". Currently, the creation of new life forms not present in nature is at small-scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Many advocates express the great potential value—to agriculture, medicine, and academic knowledge, among other fields—of creating artificial life forms. Creation of new entities could expand scientific knowledge well beyond what is currently known from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., nature could be somehow corrupted by human intervention and manipulation) and potentially influence the adoption of more engineering-like principles instead of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were to be released into nature, it could hamper biodiversity by beating out natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to sense pain, sentience, and self-perception. There is an ongoing debate as to whether such life forms should be granted moral or legal rights, though no consensus exists as to how these rights would be administered or enforced. Ethical support for synthetic biology Ethics and moral rationales that support certain applications of synthetic biology include their potential mitigation of substantial global problems of detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health, as well as potential reduction of human labor needs and, via therapies of diseases, reduction of human suffering and prolonged life. Biosafety and biocontainment What is most ethically appropriate when considering biosafety measures? How can accidental introduction of synthetic life in the natural environment be avoided? Much ethical consideration and critical thought has been given to these questions. Biosafety not only refers to biological containment; it also refers to strides taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concern for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and are incapable of flourishing in the outside world due to their "unnatural" characteristics as there is yet to be an example of a transgenic microbe conferred with a fitness advantage in the wild. In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. 
Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Regarding auxotrophy, bacteria and yeast can be engineered to be unable to produce histidine, an important amino acid for all life. Such organisms can thus only be grown on histidine-rich media in laboratory conditions, nullifying fears that they could spread into undesirable areas. Biosecurity and bioterrorism Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises ethical issues and biosecurity issues, humanity must consider and plan on how to deal with potentially harmful creations, and what kinds of ethical measures could possibly be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new because they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions. Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow. Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks." European Union The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms. A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity. COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009. 
The International Association Synthetic Biology has proposed self-regulation. This proposes specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry". United States In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology. On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology". After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings, and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies." The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the "creation of life". It noted that synthetic biology is an emerging field, which creates potential risks and rewards. The commission did not recommend policy or oversight changes and called for continued funding of the research and new funding for monitoring, study of emerging ethical issues and public education. Synthetic biology, as a major tool for biological advances, results in the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors. These security issues may be avoided by regulating industry uses of biotechnology through policy legislation. Federal guidelines on genetic manipulation are being proposed by "the President's Bioethics Commission ... in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... for educating the public". Opposition On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome. Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations". 
Health and safety The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment. Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse. Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology. See also References NHGRI. (2019, March 13). Synthetic Biology. Genome.gov. https://www.genome.gov/about-genomics/policy-issues/Synthetic-Biology Bibliography External links Engineered Pathogens and Unnatural Biological Weapons: The Future Threat of Synthetic Biology . Threats and considerations Synthetic biology books popular science book and textbooks Introductory Summary of Synthetic Biology . Concise overview of synthetic biology concepts, developments and applications Collaborative overview article on Synthetic Biology Controversial DNA startup wants to let customers create creatures (2015-01-03), San Francisco Chronicle It's Alive, But Is It Life: Synthetic Biology and the Future of Creation (28 September 2016), World Science Festival Biotechnology Molecular genetics Systems biology Bioinformatics Biocybernetics Appropriate technology Gene expression programming Bioterrorism
Kinematics
Kinematics is a subfield of physics and mathematics, developed in classical mechanics, that describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of both applied and pure mathematics since it can be studied without considering the mass of a body or the forces acting upon it. A kinematics problem begins by describing the geometry of the system and declaring the initial conditions of any known values of position, velocity and/or acceleration of points within the system. Then, using arguments from geometry, the position, velocity and acceleration of any unknown parts of the system can be determined. The study of how forces act on bodies falls within kinetics, not kinematics. For further details, see analytical dynamics. Kinematics is used in astrophysics to describe the motion of celestial bodies and collections of such bodies. In mechanical engineering, robotics, and biomechanics, kinematics is used to describe the motion of systems composed of joined parts (multi-link systems) such as an engine, a robotic arm or the human skeleton. Geometric transformations, also called rigid transformations, are used to describe the movement of components in a mechanical system, simplifying the derivation of the equations of motion. They are also central to dynamic analysis. Kinematic analysis is the process of measuring the kinematic quantities used to describe motion. In engineering, for instance, kinematic analysis may be used to find the range of movement for a given mechanism and, working in reverse, using kinematic synthesis to design a mechanism for a desired range of motion. In addition, kinematics applies algebraic geometry to the study of the mechanical advantage of a mechanical system or mechanism. Etymology The term kinematic is the English version of A.M. Ampère's cinématique, which he constructed from the Greek kinema ("movement, motion"), itself derived from kinein ("to move"). Kinematic and cinématique are related to the French word cinéma, but neither are directly derived from it. However, they do share a root word in common, as cinéma came from the shortened form of cinématographe, "motion picture projector and camera", once again from the Greek word for movement and from the Greek grapho ("to write"). Kinematics of a particle trajectory in a non-rotating frame of reference Particle kinematics is the study of the trajectory of particles. The position of a particle is defined as the coordinate vector from the origin of a coordinate frame to the particle. For example, consider a tower 50 m south from your home, where the coordinate frame is centered at your home, such that east is in the direction of the x-axis and north is in the direction of the y-axis, then the coordinate vector to the base of the tower is r = (0 m, −50 m, 0 m). If the tower is 50 m high, and this height is measured along the z-axis, then the coordinate vector to the top of the tower is r = (0 m, −50 m, 50 m). In the most general case, a three-dimensional coordinate system is used to define the position of a particle. However, if the particle is constrained to move within a plane, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without being described with respect to a reference frame. 
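To make the tower example above concrete, the following short Python sketch (illustrative only; the frame convention of x pointing east, y pointing north and z pointing up follows the text) builds the two coordinate vectors and evaluates the distance from the origin together with the direction cosines of the position vector.

```python
import numpy as np

# Frame centred at the home: x points east, y points north, z points up (as in the text).
r_base = np.array([0.0, -50.0, 0.0])   # base of the tower, 50 m south of the home
r_top  = np.array([0.0, -50.0, 50.0])  # top of the 50 m tower

distance = np.linalg.norm(r_top)       # magnitude of the position vector, about 70.7 m
direction_cosines = r_top / distance   # unit components along the x, y and z axes

print(distance, direction_cosines)
```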
The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. In three dimensions, the position vector can be expressed as where , , and are the Cartesian coordinates and , and are the unit vectors along the , , and coordinate axes, respectively. The magnitude of the position vector gives the distance between the point and the origin. The direction cosines of the position vector provide a quantitative measure of direction. In general, an object's position vector will depend on the frame of reference; different frames will lead to different values for the position vector. The trajectory of a particle is a vector function of time, , which defines the curve traced by the moving particle, given by where , , and describe each coordinate of the particle's position as a function of time. Velocity and speed The velocity of a particle is a vector quantity that describes the direction as well as the magnitude of motion of the particle. More mathematically, the rate of change of the position vector of a point with respect to time is the velocity of the point. Consider the ratio formed by dividing the difference of two positions of a particle (displacement) by the time interval. This ratio is called the average velocity over that time interval and is defined aswhere is the displacement vector during the time interval . In the limit that the time interval approaches zero, the average velocity approaches the instantaneous velocity, defined as the time derivative of the position vector, Thus, a particle's velocity is the time rate of change of its position. Furthermore, this velocity is tangent to the particle's trajectory at every position along its path. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants. The speed of an object is the magnitude of its velocity. It is a scalar quantity: where is the arc-length measured along the trajectory of the particle. This arc-length must always increase as the particle moves. Hence, is non-negative, which implies that speed is also non-negative. Acceleration The velocity vector can change in magnitude and in direction or both at once. Hence, the acceleration accounts for both the rate of change of the magnitude of the velocity vector and the rate of change of direction of that vector. The same reasoning used with respect to the position of a particle to define velocity, can be applied to the velocity to define acceleration. The acceleration of a particle is the vector defined by the rate of change of the velocity vector. The average acceleration of a particle over a time interval is defined as the ratio. where Δv is the average velocity and Δt is the time interval. The acceleration of the particle is the limit of the average acceleration as the time interval approaches zero, which is the time derivative, Alternatively, Thus, acceleration is the first derivative of the velocity vector and the second derivative of the position vector of that particle. In a non-rotating frame of reference, the derivatives of the coordinate directions are not considered as their directions and magnitudes are constants. The magnitude of the acceleration of an object is the magnitude |a| of its acceleration vector. 
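The derivative relationships just described can be checked numerically. The sketch below is my own illustration rather than part of the original text: it samples a simple trajectory with constant acceleration along y and estimates velocity, acceleration and speed by finite differences.

```python
import numpy as np

# Sample a trajectory r(t) = (x(t), y(t), z(t)) and approximate its time derivatives.
t = np.linspace(0.0, 10.0, 1001)
r = np.stack([3.0 * t,                  # x(t): uniform motion at 3 m/s
              1.0 * t**2,               # y(t): constant acceleration of 2 m/s^2
              np.zeros_like(t)], axis=1)

v = np.gradient(r, t, axis=0)           # velocity, approximately dr/dt
a = np.gradient(v, t, axis=0)           # acceleration, approximately dv/dt
speed = np.linalg.norm(v, axis=1)       # speed = |v|, always non-negative

# At t = 5 s the estimates approach v = (3, 10, 0), a = (0, 2, 0), speed of about 10.44 m/s.
print(v[500], a[500], speed[500])
```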
It is a scalar quantity: Relative position vector A relative position vector is a vector that defines the position of one point relative to another. It is the difference in position of the two points. The position of one point A relative to another point B is simply the difference between their positions which is the difference between the components of their position vectors. If point A has position components and point B has position components then the position of point A relative to point B is the difference between their components: Relative velocity The velocity of one point relative to another is simply the difference between their velocities which is the difference between the components of their velocities. If point A has velocity components and point B has velocity components then the velocity of point A relative to point B is the difference between their components: Alternatively, this same result could be obtained by computing the time derivative of the relative position vector rB/A. Relative acceleration The acceleration of one point C relative to another point B is simply the difference between their accelerations. which is the difference between the components of their accelerations. If point C has acceleration components and point B has acceleration components then the acceleration of point C relative to point B is the difference between their components: Alternatively, this same result could be obtained by computing the second time derivative of the relative position vector rB/A. Assuming that the initial conditions of the position, , and velocity at time are known, the first integration yields the velocity of the particle as a function of time. A second integration yields its path (trajectory), Additional relations between displacement, velocity, acceleration, and time can be derived. Since the acceleration is constant, can be substituted into the above equation to give: A relationship between velocity, position and acceleration without explicit time dependence can be had by solving the average acceleration for time and substituting and simplifying where denotes the dot product, which is appropriate as the products are scalars rather than vectors. The dot product can be replaced by the cosine of the angle between the vectors (see Geometric interpretation of the dot product for more details) and the vectors by their magnitudes, in which case: In the case of acceleration always in the direction of the motion and the direction of motion should be in positive or negative, the angle between the vectors is 0, so , and This can be simplified using the notation for the magnitudes of the vectors where can be any curvaceous path taken as the constant tangential acceleration is applied along that path, so This reduces the parametric equations of motion of the particle to a Cartesian relationship of speed versus position. This relation is useful when time is unknown. We also know that or is the area under a velocity–time graph. We can take by adding the top area and the bottom area. The bottom area is a rectangle, and the area of a rectangle is the where is the width and is the height. In this case and (the here is different from the acceleration ). This means that the bottom area is . Now let's find the top area (a triangle). The area of a triangle is where is the base and is the height. In this case, and or . Adding and results in the equation results in the equation . This equation is applicable when the final velocity is unknown. 
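The constant-acceleration relations derived above, and the component-wise definition of relative velocity, can be checked with a few lines of Python; every number used here is an arbitrary illustration.

```python
import numpy as np

# One-dimensional motion with constant acceleration.
v0, a, t = 4.0, 1.5, 6.0
v  = v0 + a * t                       # final velocity
dx = v0 * t + 0.5 * a * t**2          # displacement: rectangle v0*t plus triangle (1/2)*a*t^2

# Time-free relation between speed, position and acceleration: v^2 = v0^2 + 2*a*dx.
assert np.isclose(v**2, v0**2 + 2.0 * a * dx)

# Relative velocity: the velocity of point A relative to point B is the component-wise difference.
vA = np.array([3.0, 1.0, 0.0])
vB = np.array([1.0, -2.0, 0.0])
print(v, dx, vA - vB)                 # 13.0, 51.0, [2. 3. 0.]
```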
Particle trajectories in cylindrical-polar coordinates It is often convenient to formulate the trajectory of a particle r(t) = (x(t), y(t), z(t)) using polar coordinates in the X–Y plane. In this case, its velocity and acceleration take a convenient form. Recall that the trajectory of a particle P is defined by its coordinate vector r measured in a fixed reference frame F. As the particle moves, its coordinate vector r(t) traces its trajectory, which is a curve in space, given by: where x̂, ŷ, and ẑ are the unit vectors along the x, y and z axes of the reference frame F, respectively. Consider a particle P that moves only on the surface of a circular cylinder r(t) = constant, it is possible to align the z axis of the fixed frame F with the axis of the cylinder. Then, the angle θ around this axis in the x–y plane can be used to define the trajectory as, where the constant distance from the center is denoted as r, and θ(t) is a function of time. The cylindrical coordinates for r(t) can be simplified by introducing the radial and tangential unit vectors, and their time derivatives from elementary calculus: Using this notation, r(t) takes the form, In general, the trajectory r(t) is not constrained to lie on a circular cylinder, so the radius R varies with time and the trajectory of the particle in cylindrical-polar coordinates becomes: Where r, θ, and z might be continuously differentiable functions of time and the function notation is dropped for simplicity. The velocity vector vP is the time derivative of the trajectory r(t), which yields: Similarly, the acceleration aP, which is the time derivative of the velocity vP, is given by: The term acts toward the center of curvature of the path at that point on the path, is commonly called the centripetal acceleration. The term is called the Coriolis acceleration. Constant radius If the trajectory of the particle is constrained to lie on a cylinder, then the radius r is constant and the velocity and acceleration vectors simplify. The velocity of vP is the time derivative of the trajectory r(t), Planar circular trajectories A special case of a particle trajectory on a circular cylinder occurs when there is no movement along the z axis: where r and z0 are constants. In this case, the velocity vP is given by: where is the angular velocity of the unit vector around the z axis of the cylinder. The acceleration aP of the particle P is now given by: The components are called, respectively, the radial and tangential components of acceleration. The notation for angular velocity and angular acceleration is often defined as so the radial and tangential acceleration components for circular trajectories are also written as Point trajectories in a body moving in the plane The movement of components of a mechanical system are analyzed by attaching a reference frame to each part and determining how the various reference frames move relative to each other. If the structural stiffness of the parts are sufficient, then their deformation can be neglected and rigid transformations can be used to define this relative movement. This reduces the description of the motion of the various parts of a complicated mechanical system to a problem of describing the geometry of each part and geometric association of each part relative to other parts. Geometry is the study of the properties of figures that remain the same while the space is transformed in various ways—more technically, it is the study of invariants under a set of transformations. 
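Before moving on to rigid transformations, the polar-coordinate expressions quoted above can be verified symbolically. The SymPy sketch below (an illustration, not part of the original article) differentiates the Cartesian form of a planar polar trajectory and projects the acceleration onto the radial and tangential unit vectors, recovering the centripetal and Coriolis terms.

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)

# Planar trajectory written in Cartesian components from its polar description.
x, y = r * sp.cos(th), r * sp.sin(th)
acc = sp.Matrix([sp.diff(x, t, 2), sp.diff(y, t, 2)])

# Radial and tangential unit vectors expressed in Cartesian components.
e_r  = sp.Matrix([sp.cos(th), sp.sin(th)])
e_th = sp.Matrix([-sp.sin(th), sp.cos(th)])

a_radial     = sp.simplify(acc.dot(e_r))    # r'' - r*theta'^2   (contains the centripetal term)
a_tangential = sp.simplify(acc.dot(e_th))   # r*theta'' + 2*r'*theta'   (contains the Coriolis term)
print(a_radial, a_tangential)
```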
These transformations can cause the displacement of the triangle in the plane, while leaving the vertex angle and the distances between vertices unchanged. Kinematics is often described as applied geometry, where the movement of a mechanical system is described using the rigid transformations of Euclidean geometry. The coordinates of points in a plane are two-dimensional vectors in R2 (two dimensional space). Rigid transformations are those that preserve the distance between any two points. The set of rigid transformations in an n-dimensional space is called the special Euclidean group on Rn, and denoted SE(n). Displacements and motion The position of one component of a mechanical system relative to another is defined by introducing a reference frame, say M, on one that moves relative to a fixed frame, F, on the other. The rigid transformation, or displacement, of M relative to F defines the relative position of the two components. A displacement consists of the combination of a rotation and a translation. The set of all displacements of M relative to F is called the configuration space of M. A smooth curve from one position to another in this configuration space is a continuous set of displacements, called the motion of M relative to F. The motion of a body consists of a continuous set of rotations and translations. Matrix representation The combination of a rotation and translation in the plane R2 can be represented by a certain type of 3×3 matrix known as a homogeneous transform. The 3×3 homogeneous transform is constructed from a 2×2 rotation matrix A(φ) and the 2×1 translation vector d = (dx, dy), as: These homogeneous transforms perform rigid transformations on the points in the plane z = 1, that is, on points with coordinates r = (x, y, 1). In particular, let r define the coordinates of points in a reference frame M coincident with a fixed frame F. Then, when the origin of M is displaced by the translation vector d relative to the origin of F and rotated by the angle φ relative to the x-axis of F, the new coordinates in F of points in M are given by: Homogeneous transforms represent affine transformations. This formulation is necessary because a translation is not a linear transformation of R2. However, using projective geometry, so that R2 is considered a subset of R3, translations become affine linear transformations. Pure translation If a rigid body moves so that its reference frame M does not rotate (θ = 0) relative to the fixed frame F, the motion is called pure translation. In this case, the trajectory of every point in the body is an offset of the trajectory d(t) of the origin of M, that is: Thus, for bodies in pure translation, the velocity and acceleration of every point P in the body are given by: where the dot denotes the derivative with respect to time and vO and aO are the velocity and acceleration, respectively, of the origin of the moving frame M. Recall the coordinate vector p in M is constant, so its derivative is zero. Rotation of a body around a fixed axis Rotational or angular kinematics is the description of the rotation of an object. In what follows, attention is restricted to simple rotation about an axis of fixed orientation. The z-axis has been chosen for convenience. Position This allows the description of a rotation as the angular position of a planar reference frame M relative to a fixed F about this shared z-axis. 
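The planar homogeneous transform described under Matrix representation above can be written out directly. The sketch below uses an arbitrary rotation angle and translation purely for illustration.

```python
import numpy as np

def homogeneous_transform(phi, dx, dy):
    """3x3 planar homogeneous transform: 2x2 rotation A(phi) combined with translation d = (dx, dy)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c,  -s,  dx],
                     [s,   c,  dy],
                     [0.0, 0.0, 1.0]])

# A point of the moving frame M, written with the homogeneous coordinate 1.
p = np.array([2.0, 0.0, 1.0])

# Displace M relative to F: rotate by 90 degrees and translate by d = (1, 3).
T = homogeneous_transform(np.pi / 2.0, 1.0, 3.0)
P = T @ p          # coordinates of the same point in the fixed frame F: approximately (1, 5, 1)
print(P)
```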
Coordinates p = (x, y) in M are related to coordinates P = (X, Y) in F by the matrix equation: where is the rotation matrix that defines the angular position of M relative to F as a function of time. Velocity If the point p does not move in M, its velocity in F is given by It is convenient to eliminate the coordinates p and write this as an operation on the trajectory P(t), where the matrix is known as the angular velocity matrix of M relative to F. The parameter ω is the time derivative of the angle θ, that is: Acceleration The acceleration of P(t) in F is obtained as the time derivative of the velocity, which becomes where is the angular acceleration matrix of M on F, and The description of rotation then involves these three quantities: Angular position: the oriented distance from a selected origin on the rotational axis to a point of an object is a vector r(t) locating the point. The vector r(t) has some projection (or, equivalently, some component) r⊥(t) on a plane perpendicular to the axis of rotation. Then the angular position of that point is the angle θ from a reference axis (typically the positive x-axis) to the vector r⊥(t) in a known rotation sense (typically given by the right-hand rule). Angular velocity: the angular velocity ω is the rate at which the angular position θ changes with respect to time t: The angular velocity is represented in Figure 1 by a vector Ω pointing along the axis of rotation with magnitude ω and sense determined by the direction of rotation as given by the right-hand rule. Angular acceleration: the magnitude of the angular acceleration α is the rate at which the angular velocity ω changes with respect to time t: The equations of translational kinematics can easily be extended to planar rotational kinematics for constant angular acceleration with simple variable exchanges: Here θi and θf are, respectively, the initial and final angular positions, ωi and ωf are, respectively, the initial and final angular velocities, and α is the constant angular acceleration. Although position in space and velocity in space are both true vectors (in terms of their properties under rotation), as is angular velocity, angle itself is not a true vector. Point trajectories in body moving in three dimensions Important formulas in kinematics define the velocity and acceleration of points in a moving body as they trace trajectories in three-dimensional space. This is particularly important for the center of mass of a body, which is used to derive equations of motion using either Newton's second law or Lagrange's equations. Position In order to define these formulas, the movement of a component B of a mechanical system is defined by the set of rotations [A(t)] and translations d(t) assembled into the homogeneous transformation [T(t)]=[A(t), d(t)]. If p is the coordinates of a point P in B measured in the moving reference frame M, then the trajectory of this point traced in F is given by: This notation does not distinguish between P = (X, Y, Z, 1), and P = (X, Y, Z), which is hopefully clear in context. This equation for the trajectory of P can be inverted to compute the coordinate vector p in M as: This expression uses the fact that the transpose of a rotation matrix is also its inverse, that is: Velocity The velocity of the point P along its trajectory P(t) is obtained as the time derivative of this position vector, The dot denotes the derivative with respect to time; because p is constant, its derivative is zero. 
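A numerical sketch of the planar rotation formulas above: the velocity of a point fixed in M is obtained by applying the angular velocity matrix to its trajectory in F, and the constant-angular-acceleration relations mirror their translational counterparts. All values are illustrative.

```python
import numpy as np

theta, omega = 0.7, 2.0                       # angular position (rad) and angular velocity (rad/s)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Omega = np.array([[0.0, -omega],              # angular velocity matrix of M relative to F
                  [omega, 0.0]])

p = np.array([1.0, 0.5])                      # point fixed in the moving frame M
P = A @ p                                     # its coordinates in the fixed frame F
v = Omega @ P                                 # velocity in F, since dP/dt = [Omega] P

# Constant angular acceleration, by analogy with translational kinematics.
alpha, t = 0.5, 3.0
omega_f = omega + alpha * t
theta_f = theta + omega * t + 0.5 * alpha * t**2
print(P, v, omega_f, theta_f)
```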
This formula can be modified to obtain the velocity of P by operating on its trajectory P(t) measured in the fixed frame F. Substituting the inverse transform for p into the velocity equation yields: The matrix [S] is given by: where is the angular velocity matrix. Multiplying by the operator [S], the formula for the velocity vP takes the form: where the vector ω is the angular velocity vector obtained from the components of the matrix [Ω]; the vector is the position of P relative to the origin O of the moving frame M; and is the velocity of the origin O. Acceleration The acceleration of a point P in a moving body B is obtained as the time derivative of its velocity vector: This equation can be expanded firstly by computing and The formula for the acceleration AP can now be obtained as: or where α is the angular acceleration vector obtained from the derivative of the angular velocity matrix; is the relative position vector (the position of P relative to the origin O of the moving frame M); and is the acceleration of the origin of the moving frame M. Kinematic constraints Kinematic constraints are constraints on the movement of components of a mechanical system. Kinematic constraints can be considered to have two basic forms, (i) constraints that arise from hinges, sliders and cam joints that define the construction of the system, called holonomic constraints, and (ii) constraints imposed on the velocity of the system such as the knife-edge constraint of ice-skates on a flat plane, or rolling without slipping of a disc or sphere in contact with a plane, which are called non-holonomic constraints. The following are some common examples. Kinematic coupling A kinematic coupling exactly constrains all 6 degrees of freedom. Rolling without slipping An object that rolls against a surface without slipping obeys the condition that the velocity of its center of mass is equal to the cross product of its angular velocity with a vector from the point of contact to the center of mass: For the case of an object that does not tip or turn, this reduces to . Inextensible cord This is the case where bodies are connected by an idealized cord that remains in tension and cannot change length. The constraint is that the sum of lengths of all segments of the cord is the total length, and accordingly the time derivative of this sum is zero. A dynamic problem of this type is the pendulum. Another example is a drum turned by the pull of gravity upon a falling weight attached to the rim by the inextensible cord. An equilibrium problem (i.e. not kinematic) of this type is the catenary. Kinematic pairs Reuleaux called the ideal connections between components that form a machine kinematic pairs. He distinguished between higher pairs which were said to have line contact between the two links and lower pairs that have area contact between the links. J. Phillips shows that there are many ways to construct pairs that do not fit this simple classification. Lower pair A lower pair is an ideal joint, or holonomic constraint, that maintains contact between a point, line or plane in a moving solid (three-dimensional) body to a corresponding point line or plane in the fixed solid body. There are the following cases: A revolute pair, or hinged joint, requires a line, or axis, in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body maintain contact with a similar perpendicular plane in the fixed body. 
This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom, which is pure rotation about the axis of the hinge. A prismatic joint, or slider, requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom. This degree of freedom is the distance of the slide along the line. A cylindrical joint requires that a line, or axis, in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom. The position of the moving body is defined by both the rotation about and slide along the axis. A spherical joint, or ball joint, requires that a point in the moving body maintain contact with a point in the fixed body. This joint has three degrees of freedom. A planar joint requires that a plane in the moving body maintain contact with a plane in fixed body. This joint has three degrees of freedom. Higher pairs Generally speaking, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints. Kinematic chains Rigid bodies ("links") connected by kinematic pairs ("joints") are known as kinematic chains. Mechanisms and robots are examples of kinematic chains. The degree of freedom of a kinematic chain is computed from the number of links and the number and type of joints using the mobility formula. This formula can also be used to enumerate the topologies of kinematic chains that have a given degree of freedom, which is known as type synthesis in machine design. Examples The planar one degree-of-freedom linkages assembled from N links and j hinges or sliding joints are: N = 2, j = 1 : a two-bar linkage that is the lever; N = 4, j = 4 : the four-bar linkage; N = 6, j = 7 : a six-bar linkage. This must have two links ("ternary links") that support three joints. There are two distinct topologies that depend on how the two ternary linkages are connected. In the Watt topology, the two ternary links have a common joint; in the Stephenson topology, the two ternary links do not have a common joint and are connected by binary links. N = 8, j = 10 : eight-bar linkage with 16 different topologies; N = 10, j = 13 : ten-bar linkage with 230 different topologies; N = 12, j = 16 : twelve-bar linkage with 6,856 topologies. For larger chains and their linkage topologies, see R. P. Sunkari and L. C. Schmidt, "Structural synthesis of planar kinematic chains by adapting a Mckay-type algorithm", Mechanism and Machine Theory #41, pp. 1021–1030 (2006). See also Absement Acceleration Analytical mechanics Applied mechanics Celestial mechanics Centripetal force Classical mechanics Distance Dynamics (physics) Fictitious force Forward kinematics Four-bar linkage Inverse kinematics Jerk (physics) Kepler's laws Kinematic coupling Kinematic diagram Kinematic synthesis Kinetics (physics) Motion (physics) Orbital mechanics Statics Velocity Integral kinematics Chebychev–Grübler–Kutzbach criterion References Further reading Eduard Study (1913) D.H. 
Delphenich translator, "Foundations and goals of analytical kinematics". External links Java applet of 1D kinematics Physclips: Mechanics with animations and video clips from the University of New South Wales. Kinematic Models for Design Digital Library (KMODDL), featuring movies and photos of hundreds of working models of mechanical systems at Cornell University and an e-book library of classic texts on mechanical design and engineering. Micro-Inch Positioning with Kinematic Components
Metabolomics
Metabolomics is the scientific study of chemical processes involving metabolites, the small molecule substrates, intermediates, and products of cell metabolism. Specifically, metabolomics is the "systematic study of the unique chemical fingerprints that specific cellular processes leave behind", the study of their small-molecule metabolite profiles. The metabolome represents the complete set of metabolites in a biological cell, tissue, organ, or organism, which are the end products of cellular processes. Messenger RNA (mRNA), gene expression data, and proteomic analyses reveal the set of gene products being produced in the cell, data that represents one aspect of cellular function. Conversely, metabolic profiling can give an instantaneous snapshot of the physiology of that cell, and thus, metabolomics provides a direct "functional readout of the physiological state" of an organism. There are indeed quantifiable correlations between the metabolome and the other cellular ensembles (genome, transcriptome, proteome, and lipidome), which can be used to predict metabolite abundances in biological samples from, for example mRNA abundances. One of the ultimate challenges of systems biology is to integrate metabolomics with all other -omics information to provide a better understanding of cellular biology. History The concept that individuals might have a "metabolic profile" that could be reflected in the makeup of their biological fluids was introduced by Roger Williams in the late 1940s, who used paper chromatography to suggest characteristic metabolic patterns in urine and saliva were associated with diseases such as schizophrenia. However, it was only through technological advancements in the 1960s and 1970s that it became feasible to quantitatively (as opposed to qualitatively) measure metabolic profiles. The term "metabolic profile" was introduced by Horning, et al. in 1971 after they demonstrated that gas chromatography-mass spectrometry (GC-MS) could be used to measure compounds present in human urine and tissue extracts. The Horning group, along with that of Linus Pauling and Arthur B. Robinson led the development of GC-MS methods to monitor the metabolites present in urine through the 1970s. Concurrently, NMR spectroscopy, which was discovered in the 1940s, was also undergoing rapid advances. In 1974, Seeley et al. demonstrated the utility of using NMR to detect metabolites in unmodified biological samples. This first study on muscle highlighted the value of NMR in that it was determined that 90% of cellular ATP is complexed with magnesium. As sensitivity has improved with the evolution of higher magnetic field strengths and magic angle spinning, NMR continues to be a leading analytical tool to investigate metabolism. Recent efforts to utilize NMR for metabolomics have been largely driven by the laboratory of Jeremy K. Nicholson at Birkbeck College, University of London and later at Imperial College London. In 1984, Nicholson showed 1H NMR spectroscopy could potentially be used to diagnose diabetes mellitus, and later pioneered the application of pattern recognition methods to NMR spectroscopic data. In 1994 and 1996, liquid chromatography mass spectrometry metabolomics experiments were performed by Gary Siuzdak while working with Richard Lerner (then president of the Scripps Research Institute) and Benjamin Cravatt, to analyze the cerebral spinal fluid from sleep deprived animals. 
One molecule of particular interest, oleamide, was observed and later shown to have sleep inducing properties. This work is one of the earliest such experiments combining liquid chromatography and mass spectrometry in metabolomics. In 2005, the first metabolomics tandem mass spectrometry database, METLIN, for characterizing human metabolites was developed in the Siuzdak laboratory at the Scripps Research Institute. METLIN has since grown and as of December, 2023, METLIN contains MS/MS experimental data on over 930,000 molecular standards and other chemical entities, each compound having experimental tandem mass spectrometry data generated from molecular standards at multiple collision energies and in positive and negative ionization modes. METLIN is the largest repository of tandem mass spectrometry data of its kind. The dedicated academic journal Metabolomics first appeared in 2005, founded by its current editor-in-chief Roy Goodacre. In 2005, the Siuzdak lab was engaged in identifying metabolites associated with sepsis and in an effort to address the issue of statistically identifying the most relevant dysregulated metabolites across hundreds of LC/MS datasets, the first algorithm was developed to allow for the nonlinear alignment of mass spectrometry metabolomics data. Called XCMS, it has since (2012) been developed as an online tool and as of 2019 (with METLIN) has over 30,000 registered users. On 23 January 2007, the Human Metabolome Project, led by David S. Wishart, completed the first draft of the human metabolome, consisting of a database of approximately 2,500 metabolites, 1,200 drugs and 3,500 food components. Similar projects have been underway in several plant species, most notably Medicago truncatula and Arabidopsis thaliana for several years. As late as mid-2010, metabolomics was still considered an "emerging field". Further, it was noted that further progress in the field depended in large part, through addressing otherwise "irresolvable technical challenges", by technical evolution of mass spectrometry instrumentation. In 2015, real-time metabolome profiling was demonstrated for the first time. Metabolome The metabolome refers to the complete set of small-molecule (<1.5 kDa) metabolites (such as metabolic intermediates, hormones and other signaling molecules, and secondary metabolites) to be found within a biological sample, such as a single organism. The word was coined in analogy with transcriptomics and proteomics; like the transcriptome and the proteome, the metabolome is dynamic, changing from second to second. Although the metabolome can be defined readily enough, it is not currently possible to analyse the entire range of metabolites by a single analytical method. In January 2007, scientists at the University of Alberta and the University of Calgary completed the first draft of the human metabolome. The Human Metabolome Database (HMDB) is perhaps the most extensive public metabolomic spectral database to date and is a freely available electronic database (www.hmdb.ca) containing detailed information about small molecule metabolites found in the human body. It is intended to be used for applications in metabolomics, clinical chemistry, biomarker discovery and general education. The database is designed to contain or link three kinds of data: Chemical data, Clinical data and Molecular biology/biochemistry data. The database contains 220,945 metabolite entries including both water-soluble and lipid soluble metabolites. 
Additionally, 8,610 protein sequences (enzymes and transporters) are linked to these metabolite entries. Each MetaboCard entry contains 130 data fields with 2/3 of the information being devoted to chemical/clinical data and the other 1/3 devoted to enzymatic or biochemical data. The version 3.5 of the HMDB contains >16,000 endogenous metabolites, >1,500 drugs and >22,000 food constituents or food metabolites. This information, available at the Human Metabolome Database and based on analysis of information available in the current scientific literature, is far from complete. In contrast, much more is known about the metabolomes of other organisms. For example, over 50,000 metabolites have been characterized from the plant kingdom, and many thousands of metabolites have been identified and/or characterized from single plants. Each type of cell and tissue has a unique metabolic ‘fingerprint’ that can elucidate organ or tissue-specific information. Bio-specimens used for metabolomics analysis include but not limit to plasma, serum, urine, saliva, feces, muscle, sweat, exhaled breath and gastrointestinal fluid. The ease of collection facilitates high temporal resolution, and because they are always at dynamic equilibrium with the body, they can describe the host as a whole. Genome can tell what could happen, transcriptome can tell what appears to be happening, proteome can tell what makes it happen and metabolome can tell what has happened and what is happening. Metabolites Metabolites are the substrates, intermediates and products of metabolism. Within the context of metabolomics, a metabolite is usually defined as any molecule less than 1.5 kDa in size. However, there are exceptions to this depending on the sample and detection method. For example, macromolecules such as lipoproteins and albumin are reliably detected in NMR-based metabolomics studies of blood plasma. In plant-based metabolomics, it is common to refer to "primary" and "secondary" metabolites. A primary metabolite is directly involved in the normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has important ecological function. Examples include antibiotics and pigments. By contrast, in human-based metabolomics, it is more common to describe metabolites as being either endogenous (produced by the host organism) or exogenous. Metabolites of foreign substances such as drugs are termed xenometabolites. The metabolome forms a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions. Such systems have been described as hypercycles. Metabonomics Metabonomics is defined as "the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification". The word origin is from the Greek μεταβολή meaning change and nomos meaning a rule set or set of laws. This approach was pioneered by Jeremy Nicholson at Murdoch University and has been used in toxicology, disease diagnosis and a number of other fields. Historically, the metabonomics approach was one of the first methods to apply the scope of systems biology to studies of metabolism. There has been some disagreement over the exact differences between 'metabolomics' and 'metabonomics'. 
The difference between the two terms is not related to choice of analytical platform: although metabonomics is more associated with NMR spectroscopy and metabolomics with mass spectrometry-based techniques, this is simply because of usages amongst different groups that have popularized the different terms. While there is still no absolute agreement, there is a growing consensus that 'metabolomics' places a greater emphasis on metabolic profiling at a cellular or organ level and is primarily concerned with normal endogenous metabolism. 'Metabonomics' extends metabolic profiling to include information about perturbations of metabolism caused by environmental factors (including diet and toxins), disease processes, and the involvement of extragenomic influences, such as gut microflora. This is not a trivial difference; metabolomic studies should, by definition, exclude metabolic contributions from extragenomic sources, because these are external to the system being studied. However, in practice, within the field of human disease research there is still a large degree of overlap in the way both terms are used, and they are often in effect synonymous. Exometabolomics Exometabolomics, or "metabolic footprinting", is the study of extracellular metabolites. It uses many techniques from other subfields of metabolomics, and has applications in biofuel development, bioprocessing, determining drugs' mechanism of action, and studying intercellular interactions. Analytical technologies The typical workflow of metabolomics studies is shown in the figure. First, samples are collected from tissue, plasma, urine, saliva, cells, etc. Next, metabolites extracted often with the addition of internal standards and derivatization. During sample analysis, metabolites are quantified (liquid chromatography or gas chromatography coupled with MS and/or NMR spectroscopy). The raw output data can be used for metabolite feature extraction and further processed before statistical analysis (such as principal component analysis, PCA). Many bioinformatic tools and software are available to identify associations with disease states and outcomes, determine significant correlations, and characterize metabolic signatures with existing biological knowledge. Separation methods Initially, analytes in a metabolomic sample comprise a highly complex mixture. This complex mixture can be simplified prior to detection by separating some analytes from others. Separation achieves various goals: analytes which cannot be resolved by the detector may be separated in this step; in MS analysis, ion suppression is reduced; the retention time of the analyte serves as information regarding its identity. This separation step is not mandatory and is often omitted in NMR and "shotgun" based approaches such as shotgun lipidomics. Gas chromatography (GC), especially when interfaced with mass spectrometry (GC-MS), is a widely used separation technique for metabolomic analysis. GC offers very high chromatographic resolution, and can be used in conjunction with a flame ionization detector (GC/FID) or a mass spectrometer (GC-MS). The method is especially useful for identification and quantification of small and volatile molecules. However, a practical limitation of GC is the requirement of chemical derivatization for many biomolecules as only volatile chemicals can be analysed without derivatization. In cases where greater resolving power is required, two-dimensional chromatography (GCxGC) can be applied. 
High performance liquid chromatography (HPLC) has emerged as the most common separation technique for metabolomic analysis. With the advent of electrospray ionization, HPLC was coupled to MS. In contrast with GC, HPLC has lower chromatographic resolution, but requires no derivatization for polar molecules, and separates molecules in the liquid phase. Additionally HPLC has the advantage that a much wider range of analytes can be measured with a higher sensitivity than GC methods. Capillary electrophoresis (CE) has a higher theoretical separation efficiency than HPLC (although requiring much more time per separation), and is suitable for use with a wider range of metabolite classes than is GC. As for all electrophoretic techniques, it is most appropriate for charged analytes. Detection methods Mass spectrometry (MS) is used to identify and quantify metabolites after optional separation by GC, HPLC, or CE. GC-MS was the first hyphenated technique to be developed. Identification leverages the distinct patterns in which analytes fragment. These patterns can be thought of as a mass spectral fingerprint. Libraries exist that allow identification of a metabolite according to this fragmentation pattern . MS is both sensitive and can be very specific. There are also a number of techniques which use MS as a stand-alone technology: the sample is infused directly into the mass spectrometer with no prior separation, and the MS provides sufficient selectivity to both separate and to detect metabolites. For analysis by mass spectrometry, the analytes must be imparted with a charge and transferred to the gas phase. Electron ionization (EI) is the most common ionization technique applied to GC separations as it is amenable to low pressures. EI also produces fragmentation of the analyte, both providing structural information while increasing the complexity of the data and possibly obscuring the molecular ion. Atmospheric-pressure chemical ionization (APCI) is an atmospheric pressure technique that can be applied to all the above separation techniques. APCI is a gas phase ionization method, which provides slightly more aggressive ionization than ESI which is suitable for less polar compounds. Electrospray ionization (ESI) is the most common ionization technique applied in LC/MS. This soft ionization is most successful for polar molecules with ionizable functional groups. Another commonly used soft ionization technique is secondary electrospray ionization (SESI). In the 2000s, surface-based mass analysis has seen a resurgence, with new MS technologies focused on increasing sensitivity, minimizing background, and reducing sample preparation. The ability to analyze metabolites directly from biofluids and tissues continues to challenge current MS technology, largely because of the limits imposed by the complexity of these samples, which contain thousands to tens of thousands of metabolites. Among the technologies being developed to address this challenge is Nanostructure-Initiator MS (NIMS), a desorption/ ionization approach that does not require the application of matrix and thereby facilitates small-molecule (i.e., metabolite) identification. MALDI is also used; however, the application of a MALDI matrix can add significant background at that complicates analysis of the low-mass range (i.e., metabolites). In addition, the size of the resulting matrix crystals limits the spatial resolution that can be achieved in tissue imaging. 
Because of these limitations, several other matrix-free desorption/ionization approaches have been applied to the analysis of biofluids and tissues. Secondary ion mass spectrometry (SIMS) was one of the first matrix-free desorption/ionization approaches used to analyze metabolites from biological samples. SIMS uses a high-energy primary ion beam to desorb and generate secondary ions from a surface. The primary advantage of SIMS is its high spatial resolution (as small as 50 nm), a powerful characteristic for tissue imaging with MS. However, SIMS has yet to be readily applied to the analysis of biofluids and tissues because of its limited sensitivity at and analyte fragmentation generated by the high-energy primary ion beam. Desorption electrospray ionization (DESI) is a matrix-free technique for analyzing biological samples that uses a charged solvent spray to desorb ions from a surface. Advantages of DESI are that no special surface is required and the analysis is performed at ambient pressure with full access to the sample during acquisition. A limitation of DESI is spatial resolution because "focusing" the charged solvent spray is difficult. However, a recent development termed laser ablation ESI (LAESI) is a promising approach to circumvent this limitation. Most recently, ion trap techniques such as orbitrap mass spectrometry are also applied to metabolomics research. Nuclear magnetic resonance (NMR) spectroscopy is the only detection technique which does not rely on separation of the analytes, and the sample can thus be recovered for further analyses. All kinds of small molecule metabolites can be measured simultaneously - in this sense, NMR is close to being a universal detector. The main advantages of NMR are high analytical reproducibility and simplicity of sample preparation. Practically, however, it is relatively insensitive compared to mass spectrometry-based techniques. Although NMR and MS are the most widely used modern-day techniques for detection, there are other methods in use. These include Fourier-transform ion cyclotron resonance, ion-mobility spectrometry, electrochemical detection (coupled to HPLC), Raman spectroscopy and radiolabel (when combined with thin-layer chromatography). Statistical methods The data generated in metabolomics usually consist of measurements performed on subjects under various conditions. These measurements may be digitized spectra, or a list of metabolite features. In its simplest form, this generates a matrix with rows corresponding to subjects and columns corresponding with metabolite features (or vice versa). Several statistical programs are currently available for analysis of both NMR and mass spectrometry data. A great number of free software are already available for the analysis of metabolomics data shown in the table. Some statistical tools listed in the table were designed for NMR data analyses were also useful for MS data. For mass spectrometry data, software is available that identifies molecules that vary in subject groups on the basis of mass-over-charge value and sometimes retention time depending on the experimental design. Once metabolite data matrix is determined, unsupervised data reduction techniques (e.g. PCA) can be used to elucidate patterns and connections. In many studies, including those evaluating drug-toxicity and some disease models, the metabolites of interest are not known a priori. This makes unsupervised methods, those with no prior assumptions of class membership, a popular first choice. 
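As a concrete illustration of the unsupervised reduction step described above, the following Python sketch applies principal component analysis, implemented here with a singular value decomposition of the mean-centred matrix, to a subjects-by-metabolite-features table; the data and group structure are synthetic and invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))          # 30 subjects x 200 metabolite features (synthetic data)
X[:15, :5] += 2.0                       # pretend the first 15 subjects form a "disease" group

Xc = X - X.mean(axis=0)                 # centre every metabolite feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U[:, :2] * S[:2]               # coordinates of each subject on the first two PCs
explained = (S**2 / np.sum(S**2))[:2]   # fraction of total variance captured by PC1 and PC2

# Subjects with similar metabolic fingerprints cluster together in the low-dimensional score space.
print(scores[:3], explained)
```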
The most common of these methods includes principal component analysis (PCA) which can efficiently reduce the dimensions of a dataset to a few which explain the greatest variation. When analyzed in the lower-dimensional PCA space, clustering of samples with similar metabolic fingerprints can be detected. PCA algorithms aim to replace all correlated variables with a much smaller number of uncorrelated variables (referred to as principal components (PCs)) and retain most of the information in the original dataset. This clustering can elucidate patterns and assist in the determination of disease biomarkers – metabolites that correlate most with class membership. Linear models are commonly used for metabolomics data, but are affected by multicollinearity. On the other hand, multivariate statistics are thriving methods for high-dimensional correlated metabolomics data, of which the most popular one is Projection to Latent Structures (PLS) regression and its classification version PLS-DA. Other data mining methods, such as random forest, support-vector machines, etc. are received increasing attention for untargeted metabolomics data analysis. In the case of univariate methods, variables are analyzed one by one using classical statistics tools (such as Student's t-test, ANOVA or mixed models) and only these with sufficient small p-values are considered relevant. However, correction strategies should be used to reduce false discoveries when multiple comparisons are conducted since there is no standard method for measuring the total amount of metabolites directly in untargeted metabolomics. For multivariate analysis, models should always be validated to ensure that the results can be generalized. Machine learning and data mining Machine learning is a powerful tool that can be used in metabolomics analysis. Recently, scientists have developed retention time prediction software. These tools allow researchers to apply artificial intelligence to the retention time prediction of small molecules in complex mixture, such as human plasma, plant extracts, foods, or microbial cultures. Retention time prediction increases the identification rate in liquid chromatography and can lead to an improved biological interpretation of metabolomics data. Key applications Toxicity assessment/toxicology by metabolic profiling (especially of urine or blood plasma samples) detects the physiological changes caused by toxic insult of a chemical (or mixture of chemicals). In many cases, the observed changes can be related to specific syndromes, e.g. a specific lesion in liver or kidney. This is of particular relevance to pharmaceutical companies wanting to test the toxicity of potential drug candidates: if a compound can be eliminated before it reaches clinical trials on the grounds of adverse toxicity, it saves the enormous expense of the trials. For functional genomics, metabolomics can be an excellent tool for determining the phenotype caused by a genetic manipulation, such as gene deletion or insertion. Sometimes this can be a sufficient goal in itself—for instance, to detect any phenotypic changes in a genetically modified plant intended for human or animal consumption. More exciting is the prospect of predicting the function of unknown genes by comparison with the metabolic perturbations caused by deletion/insertion of known genes. Such advances are most likely to come from model organisms such as Saccharomyces cerevisiae and Arabidopsis thaliana. 
The Cravatt laboratory at the Scripps Research Institute has recently applied this technology to mammalian systems, identifying the N-acyltaurines as previously uncharacterized endogenous substrates for the enzyme fatty acid amide hydrolase (FAAH) and the monoalkylglycerol ethers (MAGEs) as endogenous substrates for the uncharacterized hydrolase KIAA1363. Metabologenomics is a novel approach to integrate metabolomics and genomics data by correlating microbial-exported metabolites with predicted biosynthetic genes. This bioinformatics-based pairing method enables natural product discovery at a larger scale by refining non-targeted metabolomic analyses to identify small molecules with related biosynthesis and to focus on those whose structures may not have been characterized previously. Fluxomics is a further development of metabolomics. The disadvantage of metabolomics is that it only provides the user with abundances or concentrations of metabolites, while fluxomics determines the reaction rates of metabolic reactions and can trace metabolites in a biological system over time. Nutrigenomics is a generalised term which links genomics, transcriptomics, proteomics and metabolomics to human nutrition. In general, in a given body fluid, a metabolome is influenced by endogenous factors such as age, sex, body composition and genetics as well as underlying pathologies. The large bowel microflora are also a very significant potential confounder of metabolic profiles and could be classified as either an endogenous or exogenous factor. The main exogenous factors are diet and drugs. Diet can then be broken down to nutrients and non-nutrients. Metabolomics is one means to determine a biological endpoint, or metabolic fingerprint, which reflects the balance of all these forces on an individual's metabolism. Thanks to recent cost reductions, metabolomics has now become accessible for companion animals, such as pregnant dogs. Plant metabolomics is designed to study the overall changes in metabolites of plant samples and then conduct deep data mining and chemometric analysis. Specialized metabolites are considered components of plant defense systems biosynthesized in response to biotic and abiotic stresses. Metabolomics approaches have recently been used to assess the natural variance in metabolite content between individual plants, an approach with great potential for the improvement of the compositional quality of crops. See also Epigenomics Fluxomics Genomics Lipidomics Molecular epidemiology Molecular medicine Molecular pathology Precision medicine Proteomics Transcriptomics XCMS Online, a bioinformatics software designed for statistical analysis of mass spectrometry data References Further reading External links Human Metabolome Database (HMDB) METLIN XCMS LCMStats Metabolights NIH Common Fund Metabolomics Consortium Metabolomics Workbench Golm Metabolome Database Metabolon Metabolism Systems biology Omics
Analytical mechanics
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, and forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity with magnitude alone, whereas a vector is represented by both a quantity and a direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up; thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and, with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics. Motivation for analytical mechanics The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering.
Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others, and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotation of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions.
If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions together with t determine the coordinates at t. This is especially true at present, with modern methods of computer modelling providing arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed. Intrinsic motion Generalized coordinates and constraints In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...). Difference between curvilinear and generalized coordinates Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates.
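A standard illustration, supplied here as an example rather than taken from the original text: a planar pendulum of fixed length \ell has two Cartesian coordinates (x, y) linked by the holonomic constraint

x^2 + y^2 = \ell^2,

so a single generalized coordinate, the angle q_1 = \theta with x = \ell\sin\theta and y = -\ell\cos\theta, is enough to specify the configuration; the constraint has removed one degree of freedom.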
The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule: For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple: and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities: D'Alembert's principle of virtual work D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is: where 𝒬i are the generalized forces (script 𝒬 instead of ordinary Q is used here to prevent conflict with canonical transformations below) and qi are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: where T is the total kinetic energy of the system, and the notation is a useful shorthand (see matrix calculus for this notation). Constraints If the curvilinear coordinate system is defined by the standard position vector r, and if the position vector can be written in terms of the generalized coordinates q and time t in the form: and this relation holds for all times t, then the constraints are called holonomic constraints. The position vector r is explicitly dependent on t in cases when the constraints vary with time, not just because of q(t). For time-independent situations, the constraints are also called scleronomic; for time-dependent cases they are called rheonomic. Lagrangian mechanics The introduction of generalized coordinates and the fundamental Lagrangian function: where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula – leads to the Euler–Lagrange equations, which are a set of N second-order ordinary differential equations, one for each qi(t). This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit. The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates: where ℝN is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time: The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
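For reference, since the displayed equations are not reproduced in this text, the standard textbook forms of the Lagrangian and the Euler–Lagrange equations (written in conventional notation, an assumption rather than a quotation of the original formulas) are

L(\mathbf{q}, \dot{\mathbf{q}}, t) = T - V, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_i}\right) - \frac{\partial L}{\partial q_i} = 0, \quad i = 1, \dots, N,

giving the N second-order ordinary differential equations described above.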
Hamiltonian mechanics The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates: and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta): where · denotes the dot product, also leading to Hamilton's equations: which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: which is often considered one of Hamilton's equations of motion in addition to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: Analogous to the configuration space, the set of all momenta is the generalized momentum space: ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: The Poisson bracket All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: Properties of the Lagrangian and the Hamiltonian The following are properties shared by the Lagrangian and Hamiltonian functions. All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence. The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is: so the Lagrangians L and L′ describe exactly the same motion. In other words, the Lagrangian of a system is not unique. Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is: (K is a frequently used letter in this case). This property is used in canonical transformations (see below). If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this immediately follows from Lagrange's equations: Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
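In conventional notation (supplied here for reference, since the displayed formulas are not reproduced in this text), the Hamiltonian, Hamilton's equations, the Poisson bracket with the resulting time evolution, and the cyclic-coordinate result just stated read

H(\mathbf{q}, \mathbf{p}, t) = \sum_i p_i \dot{q}_i - L(\mathbf{q}, \dot{\mathbf{q}}, t), \qquad \dot{q}_i = \frac{\partial H}{\partial p_i}, \quad \dot{p}_i = -\frac{\partial H}{\partial q_i},

\{A, B\} = \sum_i \left( \frac{\partial A}{\partial q_i}\frac{\partial B}{\partial p_i} - \frac{\partial A}{\partial p_i}\frac{\partial B}{\partial q_i} \right), \qquad \frac{\mathrm{d}A}{\mathrm{d}t} = \{A, H\} + \frac{\partial A}{\partial t},

\frac{\partial L}{\partial q_j} = 0 \;\Rightarrow\; \dot{p}_j = \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{q}_j} = \frac{\partial L}{\partial q_j} = 0 \;\Rightarrow\; p_j = \text{constant}.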
If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time). If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, so that T(λq̇) = λ²T(q̇) for any constant λ, and the Lagrangian is explicitly time-independent, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis for the Schrödinger equation; inserting quantum operators directly obtains it. Principle of least action Action is another quantity in analytical mechanics defined as a functional of the Lagrangian: A general way to find the equations of motion from the action is the principle of least action: where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space, in other words q(t) tracing out a path in the configuration space. The path for which action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics (Quantum Field Theory, D. McMahon, McGraw-Hill (US), 2008), and is used for calculating geodesic motion in general relativity. Hamilton–Jacobi mechanics Canonical transformations The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways: With the restriction on P and Q such that the transformed Hamiltonian system is: the above transformations are called canonical transformations; each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket {Qi, Pi} be unity for all i = 1, 2,...N. If this does not hold then the transformation is not canonical. The Hamilton–Jacobi equation By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action) plus an arbitrary constant C: the generalized momenta become: and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation: where H is the Hamiltonian as before: Another related function is Hamilton's characteristic function, used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields. Routhian mechanics Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ...
ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian: which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q, and N − s Lagrangian equations in the non cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion. Appellian mechanics Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates: as well as generalized forces mentioned above in D'Alembert's principle. The equations are where ak is the acceleration of the kth particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr. Classical field theory Lagrangian field theory Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: and the Euler–Lagrange equations have an analogue for fields: where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density (Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973). Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation. Hamiltonian field theory The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are: where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: The equations of motion are: where the variational derivative must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian. Symmetry, conservation, and Noether's theorem Symmetry transformations in classical space and time Each transformation can be described by an operator (i.e. function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries. where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂ and angle θ.
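As a concrete special case of the rotation operator just mentioned (a standard result, quoted here for illustration), rotation about the z-axis by angle θ is represented by

R(\hat{\mathbf{z}}, \theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},

which leaves the axis n̂ = ẑ unchanged.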
Noether's theorem Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s: the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The momenta corresponding to q will be conserved. See also Lagrangian mechanics Hamiltonian mechanics Theoretical mechanics Classical mechanics Hamilton–Jacobi equation Hamilton's principle Kinematics Kinetics (physics) Non-autonomous mechanics Udwadia–Kalaba equation References and notes Mathematical physics Dynamical systems
Convergent evolution
Convergent evolution is the independent evolution of similar features in species of different lineages. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions. The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to parallel evolution, which occurs when two independent species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog. Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory. Overview In morphology, analogous traits arise when different species live in similar ways and/or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies. In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies. In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course." Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans. Distinctions Cladistics In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis. Atavism In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time.
The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years. Parallel vs. convergent evolution When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two. When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences. At molecular level Proteins Protease active sites The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly. Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies. Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N-terminal threonine in order to avoid such steric clashes. Several evolutionarily independent enzyme superfamilies with different protein folds use the N-terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families. Cone snail and fish insulin Conus geographus produces a distinct form of insulin that is more similar to fish insulin protein sequences than to insulin from more closely related molluscs, suggesting convergent evolution, though with the possibility of horizontal gene transfer. Ferrous iron uptake via protein transporters in land plants and chlorophytes Distant homologues of the metal ion transporters ZIP in land plants and chlorophytes have converged in structure, likely to take up Fe2+ efficiently. The IRT1 proteins from Arabidopsis thaliana and rice have extremely different amino acid sequences from Chlamydomonas's IRT1, but their three-dimensional structures are similar, suggesting convergent evolution. Na+,K+-ATPase and Insect resistance to cardiotonic steroids Many examples of convergent evolution exist in insects in terms of developing resistance at a molecular level to toxins.
One well-characterized example is the evolution of resistance to cardiotonic steroids (CTSs) via amino acid substitutions at well-defined positions of the α-subunit of Na+,K+-ATPase (ATPalpha). Variation in ATPalpha has been surveyed in various CTS-adapted species spanning six insect orders. Among 21 CTS-adapted species, 58 (76%) of 76 amino acid substitutions at sites implicated in CTS resistance occur in parallel in at least two lineages. 30 of these substitutions (40%) occur at just two sites in the protein (positions 111 and 122). CTS-adapted species have also recurrently evolved neo-functionalized duplications of ATPalpha, with convergent tissue-specific expression patterns. Nucleic acids Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis-regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection. In animal morphology Bodyplans Swimming animals including fish such as herrings, marine mammals such as dolphins, and ichthyosaurs (of the Mesozoic) all converged on the same streamlined shape. A similar shape and swimming adaptations are even present in molluscs, such as Phylliroe. The fusiform bodyshape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation to enable them to travel at high speed in a high drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming. The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body, and especially the skull shape, of the thylacine (Tasmanian tiger or Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes. Echolocation As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations. Electric fishes The Gymnotiformes of South America and the Mormyridae of Africa independently evolved passive electroreception (around 119 and 110 million years ago, respectively). Around 20 million years after acquiring that ability, both groups evolved active electrogenesis, producing weak electric fields to help them detect prey. Eyes One of the best-known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidaria (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes—with one sharp difference: the cephalopod eye is "wired" in the opposite direction, with blood and nerve vessels entering from the back of the retina, rather than the front as in vertebrates. As a result, vertebrates have a blind spot. Flight Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have independently evolved their own means of powered flight. Their wings differ substantially in construction. 
The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore- and hindlimbs, while insects have wings that evolved separately from different organs. Flying squirrels and sugar gliders are much alike in their body plans, with gliding wings stretched between their limbs, but flying squirrels are placental mammals while sugar gliders are marsupials, widely separated within the mammal lineage from the placentals. Hummingbird hawk-moths and hummingbirds have evolved similar flight and feeding patterns. Insect mouthparts Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting-chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower-visiting insects such as bees and flower beetles, or the biting-sucking mouthparts of blood-sucking insects such as fleas and mosquitos. Opposable thumbs Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans and other apes, monkeys, and lemurs. Opposable thumbs also evolved in giant pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers. Primates Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin-lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes. Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80% of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved. In plants The annual life-cycle While most plant species are perennial, about 6% follow an annual life cycle, living for only one growing season. The annual life cycle independently emerged in over 120 plant families of angiosperms. 
The prevalence of annual species increases under hot-dry summer conditions in the four species-rich families of annuals (Asteraceae, Brassicaceae, Fabaceae, and Poaceae), indicating that the annual life cycle is adaptive. Carbon fixation C4 photosynthesis, one of the three major carbon-fixing biochemical processes, has arisen independently up to 40 times. About 7,600 plant species of angiosperms use C4 carbon fixation, with many monocots including 46% of grasses such as maize and sugar cane, and dicots including several species in the Chenopodiaceae and the Amaranthaceae. Fruits Fruits with a wide variety of structural origins have converged to become edible. Apples are pomes with five carpels; their accessory tissues form the apple's core, surrounded by structures from outside the botanical fruit, the receptacle or hypanthium. Other edible fruits include other plant tissues; the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits. Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology. Carnivory Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis-related protein and a thaumatin-related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress-responsive proteins have often been co-opted in the repeated evolution of carnivory. Methods of inference Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern-based or process-based convergence is expected. Pattern-based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process-based convergence is when the convergence is due to similar forces of natural selection. Pattern-based measures Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long-term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa. Distance-based measures assess the degree of similarity between lineages over time.
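As a toy sketch of the distance-based, Brownian-motion framing described above (an illustrative simulation, not any published measure; the trait values, rate, and score formula are assumptions):

import numpy as np

rng = np.random.default_rng(1)
steps, sigma = 100, 0.3

# Two lineages evolving one trait independently under Brownian motion,
# starting from divergent ancestral values (hypothetical numbers).
traits = np.zeros((2, steps + 1))
traits[:, 0] = [0.0, 5.0]
traits[:, 1:] = traits[:, [0]] + np.cumsum(
    rng.normal(0.0, sigma, size=(2, steps)), axis=1)

# A distance-based score in the spirit of the measures described above:
# one minus the ratio of the present-day phenotypic distance to the maximum
# distance reached at any time; values near 1 suggest stronger convergence.
distances = np.abs(traits[0] - traits[1])
score = 1.0 - distances[-1] / distances.max()
print(f"convergence score: {score:.2f}")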
Frequency-based measures assess the number of lineages that have evolved in a particular trait space. Process-based measures Methods to infer process-based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein–Uhlenbeck process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred. See also : the presence of multiple alleles in ancestral populations might lead to the impression that convergent evolution has occurred. Iterative evolution – The repeated evolution of a specific trait or body plan from the same ancestral lineage at different points in time. Breeding back – A form of selective breeding to recreate the traits of an extinct species, but the genome will differ from the original species. Orthogenesis (contrastable with convergent evolution; involves teleology) Contingency (evolutionary biology) – effect of evolutionary history on outcomes Notes References Further reading External links Convergent evolution Evolutionary biology terminology
Mass transfer
Mass transfer is the net movement of mass from one location (usually meaning stream, phase, fraction, or component) to another. Mass transfer occurs in many processes, such as absorption, evaporation, drying, precipitation, membrane filtration, and distillation. Mass transfer is used by different scientific disciplines for different processes and mechanisms. The phrase is commonly used in engineering for physical processes that involve diffusive and convective transport of chemical species within physical systems. Some common examples of mass transfer processes are the evaporation of water from a pond to the atmosphere, the purification of blood in the kidneys and liver, and the distillation of alcohol. In industrial processes, mass transfer operations include separation of chemical components in distillation columns, absorbers such as scrubbers or stripping, adsorbers such as activated carbon beds, and liquid-liquid extraction. Mass transfer is often coupled to additional transport processes, for instance in industrial cooling towers. These towers couple heat transfer to mass transfer by allowing hot water to flow in contact with air. The water is cooled by expelling some of its content in the form of water vapour. Astrophysics In astrophysics, mass transfer is the process by which matter gravitationally bound to a body, usually a star, fills its Roche lobe and becomes gravitationally bound to a second body, usually a compact object (white dwarf, neutron star or black hole), and is eventually accreted onto it. It is a common phenomenon in binary systems, and may play an important role in some types of supernovae and pulsars. Chemical engineering Mass transfer finds extensive application in chemical engineering problems. It is used in reaction engineering, separations engineering, heat transfer engineering, and many other sub-disciplines of chemical engineering like electrochemical engineering. The driving force for mass transfer is usually a difference in chemical potential, when it can be defined, though other thermodynamic gradients may couple to the flow of mass and drive it as well. A chemical species moves from areas of high chemical potential to areas of low chemical potential. Thus, the maximum theoretical extent of a given mass transfer is typically determined by the point at which the chemical potential is uniform. For single phase-systems, this usually translates to uniform concentration throughout the phase, while for multiphase systems chemical species will often prefer one phase over the others and reach a uniform chemical potential only when most of the chemical species has been absorbed into the preferred phase, as in liquid-liquid extraction. While thermodynamic equilibrium determines the theoretical extent of a given mass transfer operation, the actual rate of mass transfer will depend on additional factors including the flow patterns within the system and the diffusivities of the species in each phase. This rate can be quantified through the calculation and application of mass transfer coefficients for an overall process. These mass transfer coefficients are typically published in terms of dimensionless numbers, often including Péclet numbers, Reynolds numbers, Sherwood numbers, and Schmidt numbers, among others. Analogies between heat, mass, and momentum transfer There are notable similarities in the commonly used approximate differential equations for momentum, heat, and mass transfer. 
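In one-dimensional form and with conventional symbols (the notation is a standard choice supplied here for illustration, not quoted from this text), the three analogous flux laws discussed in this section can be written

J_A = -D_{AB}\,\frac{\mathrm{d}c_A}{\mathrm{d}x} \ \text{(Fick's law, mass)}, \qquad q = -k\,\frac{\mathrm{d}T}{\mathrm{d}x} \ \text{(Fourier's law, heat)}, \qquad \tau_{yx} = -\mu\,\frac{\mathrm{d}u_x}{\mathrm{d}y} \ \text{(Newton's law of viscosity, momentum flux)},

each expressing a flux proportional to the negative gradient of a driving quantity.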
The molecular transfer equations of Newton's law for fluid momentum at low Reynolds number (Stokes flow), Fourier's law for heat, and Fick's law for mass are very similar, since they are all linear approximations to transport of conserved quantities in a flow field. At higher Reynolds number, the analogy between mass and heat transfer and momentum transfer becomes less useful due to the nonlinearity of the Navier-Stokes equation (or more fundamentally, the general momentum conservation equation), but the analogy between heat and mass transfer remains good. A great deal of effort has been devoted to developing analogies among these three transport processes so as to allow prediction of one from any of the others. References See also Crystal growth Heat transfer Fick's laws of diffusion Distillation column McCabe-Thiele method Vapor-Liquid Equilibrium Liquid-liquid extraction Separation process Binary star Type Ia supernova Thermodiffusion Accretion (astrophysics) Transport phenomena Mechanical engineering Heating, ventilation, and air conditioning
Rigour
Rigour (British English) or rigor (American English; see spelling differences) describes a condition of stiffness or strictness. These constraints may be environmentally imposed, such as "the rigours of famine"; logically imposed, such as mathematical proofs which must maintain consistent answers; or socially imposed, such as the process of defining ethics and law. Etymology "Rigour" comes to English through old French (13th c., Modern French rigueur) meaning "stiffness", which itself is based on the Latin rigorem (nominative rigor) "numbness, stiffness, hardness, firmness; roughness, rudeness", from the verb rigere "to be stiff". The noun was frequently used to describe a condition of strictness or stiffness, which arises from a situation or constraint either chosen or experienced passively. For example, the title of the book Theologia Moralis Inter Rigorem et Laxitatem Medi roughly translates as "mediating theological morality between rigour and laxness". The book details, for the clergy, situations in which they are obligated to follow church law exactly, and in which situations they can be more forgiving yet still considered moral. Rigor mortis translates directly as the stiffness (rigor) of death (mortis), again describing a condition which arises from a certain constraint (death). Intellectualism Intellectual rigour is a process of thought which is consistent, does not contain self-contradiction, and takes into account the entire scope of available knowledge on the topic. It actively avoids logical fallacy. Furthermore, it requires a sceptical assessment of the available knowledge. If a topic or case is dealt with in a rigorous way, it typically means that it is dealt with in a comprehensive, thorough and complete way, leaving no room for inconsistencies. Scholarly method describes the different approaches or methods which may be taken to apply intellectual rigour on an institutional level to ensure the quality of information published. An example of intellectual rigour assisted by a methodical approach is the scientific method, in which a person will produce a hypothesis based on what they believe to be true, then construct experiments in order to prove that hypothesis wrong. This method, when followed correctly, helps to guard against circular reasoning and other fallacies which frequently plague conclusions within academia. Other disciplines, such as philosophy and mathematics, employ their own structures to ensure intellectual rigour. Each method requires close attention to criteria for logical consistency, as well as to all relevant evidence and possible differences of interpretation. At an institutional level, peer review is used to validate intellectual rigour. Honesty Intellectual rigour is a subset of intellectual honesty—a practice of thought in which one's convictions are kept in proportion to valid evidence. Intellectual honesty is an unbiased approach to the acquisition, analysis, and transmission of ideas. A person is being intellectually honest when he or she, knowing the truth, states that truth, regardless of outside social/environmental pressures. It is possible to doubt whether complete intellectual honesty exists—on the grounds that no one can entirely master his or her own presuppositions—without doubting that certain kinds of intellectual rigour are potentially available. The distinction certainly matters greatly in debate, if one wishes to say that an argument is flawed in its premises.
Politics and law The setting for intellectual rigour does tend to assume a principled position from which to advance or argue. An opportunistic tendency to use any argument at hand is not very rigorous, although very common in politics, for example. Arguing one way one day, and another later, can be defended by casuistry, i.e. by saying the cases are different. In the legal context, for practical purposes, the facts of cases do always differ. Case law can therefore be at odds with a principled approach; and intellectual rigour can seem to be defeated. This defines a judge's problem with uncodified law. Codified law poses a different problem, of interpretation and adaptation of definite principles without losing the point; here applying the letter of the law, with all due rigour, may on occasion seem to undermine the principled approach. Mathematics Mathematical rigour can apply to methods of mathematical proof and to methods of mathematical practice (thus relating to other interpretations of rigour). Mathematical proof Mathematical rigour is often cited as a kind of gold standard for mathematical proof. Its history traces back to Greek mathematics, especially to Euclid's Elements. Until the 19th century, Euclid's Elements was seen as extremely rigorous and profound, but in the late 19th century, Hilbert (among others) realized that the work left certain assumptions implicit—assumptions that could not be proved from Euclid's Axioms (e.g. two circles can intersect in a point, some point is within an angle, and figures can be superimposed on each other). This was contrary to the idea of rigorous proof where all assumptions need to be stated and nothing can be left implicit. New foundations were developed using the axiomatic method to address this gap in rigour found in the Elements (e.g., Hilbert's axioms, Birkhoff's axioms, Tarski's axioms). During the 19th century, the term "rigorous" began to be used to describe increasing levels of abstraction when dealing with calculus which eventually became known as mathematical analysis. The works of Cauchy added rigour to the older works of Euler and Gauss. The works of Riemann added rigour to the works of Cauchy. The works of Weierstrass added rigour to the works of Riemann, eventually culminating in the arithmetization of analysis. Starting in the 1870s, the term gradually came to be associated with Cantorian set theory. Mathematical rigour can be modelled as amenability to algorithmic proof checking. Indeed, with the aid of computers, it is possible to check some proofs mechanically. Formal rigour is the introduction of high degrees of completeness by means of a formal language where such proofs can be codified using set theories such as ZFC (see automated theorem proving). Published mathematical arguments have to conform to a standard of rigour, but are written in a mixture of symbolic and natural language. In this sense, written mathematical discourse is a prototype of formal proof. Often, a written proof is accepted as rigorous although it might not be formalised as yet. The reason often cited by mathematicians for writing informally is that completely formal proofs tend to be longer and more unwieldy, thereby obscuring the line of argument. An argument that appears obvious to human intuition may in fact require fairly long formal derivations from the axioms. 
A particularly well-known example is how in Principia Mathematica, Whitehead and Russell have to expend a number of lines of rather opaque effort in order to establish that, indeed, it is sensical to say: "1+1=2". In short, comprehensibility is favoured over formality in written discourse. Still, advocates of automated theorem provers may argue that the formalisation of proof does improve the mathematical rigour by disclosing gaps or flaws in informal written discourse. When the correctness of a proof is disputed, formalisation is a way to settle such a dispute as it helps to reduce misinterpretations or ambiguity. Physics The role of mathematical rigour in relation to physics is twofold: First, there is the general question, sometimes called Wigner's Puzzle, "how it is that mathematics, quite generally, is applicable to nature?" Some scientists believe that its record of successful application to nature justifies the study of mathematical physics. Second, there is the question regarding the role and status of mathematically rigorous results and relations. This question is particularly vexing in relation to quantum field theory, where computations often produce infinite values for which a variety of non-rigorous work-arounds have been devised. Both aspects of mathematical rigour in physics have attracted considerable attention in philosophy of science. Education Rigour in the classroom is a hotly debated topic amongst educators. Even the semantic meaning of the word is contested. Generally speaking, classroom rigour consists of multi-faceted, challenging instruction and correct placement of the student. Students excelling in formal operational thought tend to excel in classes for gifted students. Students who have not reached that final stage of cognitive development, according to developmental psychologist Jean Piaget, can build upon those skills with the help of a properly trained teacher. Rigour in the classroom is commonly called "rigorous instruction". It is instruction that requires students to construct meaning for themselves, impose structure on information, integrate individual skills into processes, operate within but at the outer edge of their abilities, and apply what they learn in more than one context and to unpredictable situations.
See also
Intellectual honesty
Intellectual dishonesty
Pedant
Scientific method
Self-deception
Sophistry
Cognitive rigor
Exothermic process
In thermodynamics, an exothermic process is a thermodynamic process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in a form of light (e.g. a spark, flame, or flash), electricity (e.g. a battery), or sound (e.g. explosion heard when burning hydrogen). The term exothermic was first coined by 19th-century French chemist Marcellin Berthelot. The opposite of an exothermic process is an endothermic process, one that absorbs energy, usually in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions where chemical bond energy is converted to thermal energy (heat). Two types of chemical reactions Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows: Exothermic An exothermic reaction occurs when heat is released to the surroundings. According to the IUPAC, an exothermic reaction is "a reaction for which the overall standard enthalpy change ΔH° is negative". Some examples of exothermic processes are fuel combustion, condensation and nuclear fission, which is used in nuclear power plants to release large amounts of energy. Endothermic In an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction, usually driven by a favorable entropy increase in the system. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them. Photosynthesis, the process that allows plants to convert carbon dioxide and water to sugar and oxygen, is an endothermic process: plants absorb radiant energy from the sun and use it in an endothermic, otherwise non-spontaneous process. The chemical energy stored can be freed by the inverse (spontaneous) process: combustion of sugar, which gives carbon dioxide, water and heat (radiant energy). Energy release Exothermic refers to a transformation in which a closed system releases energy (heat) to the surroundings, expressed by Q < 0. When the transformation occurs at constant pressure and without exchange of electrical energy, the heat exchanged equals the enthalpy change, i.e. Q = ΔH, while at constant volume, according to the first law of thermodynamics, it equals the change in internal energy, i.e. Q = ΔU. In an adiabatic system (i.e. a system that does not exchange heat with the surroundings), an otherwise exothermic process results in an increase in temperature of the system. In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This light that is released can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
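The sign convention can be made concrete with a short calculation. The Python sketch below is a minimal illustration: the standard enthalpies of formation are rounded reference values assumed for this example, not figures taken from this article. It estimates ΔH for methane combustion via Hess's law and classifies the reaction:

```python
# Approximate standard enthalpies of formation at 298 K, in kJ/mol (assumed reference values).
delta_Hf = {
    "CH4(g)": -74.8,
    "O2(g)": 0.0,
    "CO2(g)": -393.5,
    "H2O(l)": -285.8,
}

# CH4 + 2 O2 -> CO2 + 2 H2O
reactants = {"CH4(g)": 1, "O2(g)": 2}
products = {"CO2(g)": 1, "H2O(l)": 2}

def reaction_enthalpy(products, reactants):
    """Hess's law: sum of product formation enthalpies minus sum for reactants."""
    h_products = sum(n * delta_Hf[s] for s, n in products.items())
    h_reactants = sum(n * delta_Hf[s] for s, n in reactants.items())
    return h_products - h_reactants

dH = reaction_enthalpy(products, reactants)
print(f"Delta H = {dH:.1f} kJ/mol ->", "exothermic" if dH < 0 else "endothermic")
# Prints roughly -890.3 kJ/mol: heat is released to the surroundings.
```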
Examples
Some examples of exothermic processes are:
Combustion of fuels such as wood, coal and oil/petroleum
The thermite reaction
The reaction of alkali metals and other highly electropositive metals with water
Condensation of rain from water vapor
Mixing water and strong acids or strong bases
The reaction of acids and bases
Dehydration of carbohydrates by sulfuric acid
The setting of cement and concrete
Some polymerization reactions such as the setting of epoxy resin
The reaction of most metals with halogens or oxygen
Nuclear fusion in hydrogen bombs and in stellar cores (to iron)
Nuclear fission of heavy elements
The reaction between zinc and hydrochloric acid
Respiration (breaking down of glucose to release energy in cells)

Implications for chemical reactions
Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions. In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction.

See also
Calorimetry
Chemical thermodynamics
Differential scanning calorimetry
Endergonic
Endergonic reaction
Exergonic
Exergonic reaction
Endothermic reaction
Drug metabolism
Drug metabolism is the metabolic breakdown of drugs by living organisms, usually through specialized enzymatic systems. More generally, xenobiotic metabolism (from the Greek xenos "stranger" and biotic "related to living beings") is the set of metabolic pathways that modify the chemical structure of xenobiotics, which are compounds foreign to an organism's normal biochemistry, such as any drug or poison. These pathways are a form of biotransformation present in all major groups of organisms and are considered to be of ancient origin. These reactions often act to detoxify poisonous compounds (although in some cases the intermediates in xenobiotic metabolism can themselves cause toxic effects). The study of drug metabolism is the object of pharmacokinetics. Metabolism is one of the stages (see ADME) of the drug's transit through the body that involves the breakdown of the drug so that it can be excreted by the body. The metabolism of pharmaceutical drugs is an important aspect of pharmacology and medicine. For example, the rate of metabolism determines the duration and intensity of a drug's pharmacologic action. Drug metabolism also affects multidrug resistance in infectious diseases and in chemotherapy for cancer, and the actions of some drugs as substrates or inhibitors of enzymes involved in xenobiotic metabolism are a common reason for hazardous drug interactions. These pathways are also important in environmental science, with the xenobiotic metabolism of microorganisms determining whether a pollutant will be broken down during bioremediation, or persist in the environment. The enzymes of xenobiotic metabolism, particularly the glutathione S-transferases are also important in agriculture, since they may produce resistance to pesticides and herbicides. Drug metabolism is divided into three phases. In phase I, enzymes such as cytochrome P450 oxidases introduce reactive or polar groups into xenobiotics. These modified compounds are then conjugated to polar compounds in phase II reactions. These reactions are catalysed by transferase enzymes such as glutathione S-transferases. Finally, in phase III, the conjugated xenobiotics may be further processed, before being recognised by efflux transporters and pumped out of cells. Drug metabolism often converts lipophilic compounds into hydrophilic products that are more readily excreted. Permeability barriers and detoxification The exact compounds an organism is exposed to will be largely unpredictable, and may differ widely over time; these are major characteristics of xenobiotic toxic stress. The major challenge faced by xenobiotic detoxification systems is that they must be able to remove the almost-limitless number of xenobiotic compounds from the complex mixture of chemicals involved in normal metabolism. The solution that has evolved to address this problem is an elegant combination of physical barriers and low-specificity enzymatic systems. All organisms use cell membranes as hydrophobic permeability barriers to control access to their internal environment. Polar compounds cannot diffuse across these cell membranes, and the uptake of useful molecules is mediated through transport proteins that specifically select substrates from the extracellular mixture. This selective uptake means that most hydrophilic molecules cannot enter cells, since they are not recognised by any specific transporters. 
In contrast, the diffusion of hydrophobic compounds across these barriers cannot be controlled, and organisms, therefore, cannot exclude lipid-soluble xenobiotics using membrane barriers. However, the existence of a permeability barrier means that organisms were able to evolve detoxification systems that exploit the hydrophobicity common to membrane-permeable xenobiotics. These systems therefore solve the specificity problem by possessing such broad substrate specificities that they metabolise almost any non-polar compound. Useful metabolites are excluded since they are polar, and in general contain one or more charged groups. The detoxification of the reactive by-products of normal metabolism cannot be achieved by the systems outlined above, because these species are derived from normal cellular constituents and usually share their polar characteristics. However, since these compounds are few in number, specific enzymes can recognize and remove them. Examples of these specific detoxification systems are the glyoxalase system, which removes the reactive aldehyde methylglyoxal, and the various antioxidant systems that eliminate reactive oxygen species. Phases of detoxification The metabolism of xenobiotics is often divided into three phases: modification, conjugation, and excretion. These reactions act in concert to detoxify xenobiotics and remove them from cells. Phase I – modification In phase I, a variety of enzymes act to introduce reactive and polar groups into their substrates. One of the most common modifications is hydroxylation catalysed by the cytochrome P-450-dependent mixed-function oxidase system. These enzyme complexes act to incorporate an atom of oxygen into nonactivated hydrocarbons, which can result in either the introduction of hydroxyl groups or N-, O- and S-dealkylation of substrates. The reaction mechanism of the P-450 oxidases proceeds through the reduction of cytochrome-bound oxygen and the generation of a highly-reactive oxyferryl species, according to the following scheme: O2 + NADPH + H+ + RH → NADP+ + H2O + ROH Phase I reactions (also termed nonsynthetic reactions) may occur by oxidation, reduction, hydrolysis, cyclization, decyclization, and addition of oxygen or removal of hydrogen, carried out by mixed function oxidases, often in the liver. These oxidative reactions typically involve a cytochrome P450 monooxygenase (often abbreviated CYP), NADPH and oxygen. The classes of pharmaceutical drugs that utilize this method for their metabolism include phenothiazines, paracetamol, and steroids. If the metabolites of phase I reactions are sufficiently polar, they may be readily excreted at this point. However, many phase I products are not eliminated rapidly and undergo a subsequent reaction in which an endogenous substrate combines with the newly incorporated functional group to form a highly polar conjugate. A common Phase I oxidation involves conversion of a C-H bond to a C-OH. This reaction sometimes converts a pharmacologically inactive compound (a prodrug) to a pharmacologically active one. By the same token, Phase I can turn a nontoxic molecule into a poisonous one (toxification). Simple hydrolysis in the stomach is normally an innocuous reaction, however there are exceptions. For example, phase I metabolism converts acetonitrile to HOCH2CN, which rapidly dissociates into formaldehyde and hydrogen cyanide. Phase I metabolism of drug candidates can be simulated in the laboratory using non-enzyme catalysts. 
This type of biomimetic reaction tends to give products that often contain the Phase I metabolites. As an example, the major metabolite of the pharmaceutical trimebutine, desmethyltrimebutine (nor-trimebutine), can be efficiently produced by in vitro oxidation of the commercially available drug. Hydroxylation of an N-methyl group leads to expulsion of a molecule of formaldehyde, while oxidation of the O-methyl groups takes place to a lesser extent.
Oxidation
Cytochrome P450 monooxygenase system
Flavin-containing monooxygenase system
Alcohol dehydrogenase and aldehyde dehydrogenase
Monoamine oxidase
Co-oxidation by peroxidases
Reduction
NADPH-cytochrome P450 reductase
Cytochrome P450 reductase, also known as NADPH:ferrihemoprotein oxidoreductase, NADPH:hemoprotein oxidoreductase, NADPH:P450 oxidoreductase, P450 reductase, POR, CPR or CYPOR, is a membrane-bound enzyme required for electron transfer to cytochrome P450 in the microsome of the eukaryotic cell from the FAD- and FMN-containing enzyme NADPH:cytochrome P450 reductase. The general scheme of electron flow in the POR/P450 system is: NADPH → FAD → FMN → P450 → O2.
Reduced (ferrous) cytochrome P450
During reduction reactions, a chemical can enter futile cycling, in which it gains a free-radical electron, then promptly loses it to oxygen (to form a superoxide anion).
Hydrolysis
Esterases and amidase
Epoxide hydrolase
Phase II – conjugation In subsequent phase II reactions, these activated xenobiotic metabolites are conjugated with charged species such as glutathione (GSH), sulfate, glycine, or glucuronic acid. Sites on drugs where conjugation reactions occur include carboxy (-COOH), hydroxy (-OH), amino (NH2), and thiol (-SH) groups. Products of conjugation reactions have increased molecular weight and tend to be less active than their substrates, unlike Phase I reactions which often produce active metabolites. The addition of large anionic groups (such as GSH) detoxifies reactive electrophiles and produces more polar metabolites that cannot diffuse across membranes, and may, therefore, be actively transported. These reactions are catalysed by a large group of broad-specificity transferases, which in combination can metabolise almost any hydrophobic compound that contains nucleophilic or electrophilic groups. One of the most important classes of this group is that of the glutathione S-transferases (GSTs). Phase III – further modification and excretion After phase II reactions, the xenobiotic conjugates may be further metabolized. A common example is the processing of glutathione conjugates to acetylcysteine (mercapturic acid) conjugates. Here, the γ-glutamate and glycine residues in the glutathione molecule are removed by gamma-glutamyl transpeptidase and dipeptidases. In the final step, the cysteine residue in the conjugate is acetylated. Conjugates and their metabolites can be excreted from cells in phase III of their metabolism, with the anionic groups acting as affinity tags for a variety of membrane transporters of the multidrug resistance protein (MRP) family. These proteins are members of the family of ATP-binding cassette transporters and can catalyse the ATP-dependent transport of a huge variety of hydrophobic anions, and thus act to remove phase II products to the extracellular medium, where they may be further metabolized or excreted. Endogenous toxins The detoxification of endogenous reactive metabolites such as peroxides and reactive aldehydes often cannot be achieved by the system described above.
This is the result of these species' being derived from normal cellular constituents and usually sharing their polar characteristics. However, since these compounds are few in number, it is possible for enzymatic systems to utilize specific molecular recognition to recognize and remove them. The similarity of these molecules to useful metabolites therefore means that different detoxification enzymes are usually required for the metabolism of each group of endogenous toxins. Examples of these specific detoxification systems are the glyoxalase system, which acts to dispose of the reactive aldehyde methylglyoxal, and the various antioxidant systems that remove reactive oxygen species. Sites Quantitatively, the smooth endoplasmic reticulum of the liver cell is the principal organ of drug metabolism, although every biological tissue has some ability to metabolize drugs. Factors responsible for the liver's contribution to drug metabolism include that it is a large organ, that it is the first organ perfused by chemicals absorbed in the gut, and that there are very high concentrations of most drug-metabolizing enzyme systems relative to other organs. If a drug is taken into the GI tract, where it enters hepatic circulation through the portal vein, it becomes well-metabolized and is said to show the first pass effect. Other sites of drug metabolism include epithelial cells of the gastrointestinal tract, lungs, kidneys, and the skin. These sites are usually responsible for localized toxicity reactions. Factors affecting drug metabolism The duration and intensity of pharmacological action of most lipophilic drugs are determined by the rate at which they are metabolized to inactive products. The Cytochrome P450 monooxygenase system is a crucial pathway in this regard. In general, anything that increases the rate of metabolism (e.g., enzyme induction) of a pharmacologically active metabolite will decrease the duration and intensity of the drug action. The opposite is also true, as in enzyme inhibition. However, in cases where an enzyme is responsible for metabolizing a pro-drug into a drug, enzyme induction can accelerate this conversion and increase drug levels, potentially causing toxicity. Various physiological and pathological factors can also affect drug metabolism. Physiological factors that can influence drug metabolism include age, individual variation (e.g., pharmacogenetics), enterohepatic circulation, nutrition, sex differences or gut microbiota. This last factor has significance because gut microorganisms are able to chemically modify the structure of drugs through degradation and biotransformation processes, thus altering the activity and toxicity of drugs. These processes can decrease the efficacy of drugs, as is the case with digoxin in the presence of Eggerthella lenta in the microbiota. Genetic variation (polymorphism) accounts for some of the variability in the effect of drugs. In general, drugs are metabolized more slowly in fetal, neonatal and elderly humans and animals than in adults. Inherited genetic variations in drug metabolising enzymes result in their different catalytic activity levels. For example, with N-acetyltransferases (involved in Phase II reactions), individual variation creates a group of people who acetylate slowly (slow acetylators) and those who acetylate quickly (rapid acetylators), split roughly 50:50 in the population of Canada.
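The practical consequence of such phenotype differences can be illustrated with simple first-order elimination kinetics. The Python sketch below compares drug exposure for a hypothetical slow versus rapid metaboliser; the half-lives and concentrations are illustrative assumptions, not measured values from this article:

```python
import math

def concentration(c0, half_life_h, t_h):
    """First-order elimination: C(t) = C0 * exp(-k*t), with k = ln(2) / t_half."""
    k = math.log(2) / half_life_h
    return c0 * math.exp(-k * t_h)

c0 = 10.0  # arbitrary initial plasma concentration (mg/L), assumed for illustration
phenotypes = {"rapid acetylator": 1.5, "slow acetylator": 4.0}  # assumed half-lives (h)

for phenotype, t_half in phenotypes.items():
    c8 = concentration(c0, t_half, 8.0)
    print(f"{phenotype}: ~{c8:.2f} mg/L remaining 8 h after dosing")

# The slow metaboliser retains far more drug at the same time point, which is
# one mechanism behind the dose-dependent toxicity described in the text.
```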
However, variability in the distribution of NAT2 alleles across different populations is high and some ethnicities have a higher proportion of slow acetylators. This variation in metabolising capacity may have dramatic consequences, as the slow acetylators are more prone to dose-dependent toxicity. The NAT2 enzyme is a primary metaboliser of antituberculosis drugs (isoniazid), some antihypertensives (hydralazine), anti-arrhythmic drugs (procainamide), antidepressants (phenelzine) and many more; increased toxicity as well as adverse drug reactions in slow acetylators have been widely reported. Similar phenomena of altered metabolism due to inherited variations have been described for other drug-metabolising enzymes, like CYP2D6, CYP3A4, DPYD, UGT1A1. DPYD and UGT1A1 genotyping is now required before administration of the corresponding substrate compounds (5-FU and capecitabine for DPYD and irinotecan for UGT1A1) to determine the activity of the DPYD and UGT1A1 enzymes and reduce the dose of the drug in order to avoid severe adverse reactions. Dose, frequency, route of administration, tissue distribution and protein binding of the drug affect its metabolism. Pathological factors can also influence drug metabolism, including liver, kidney, or heart diseases. In silico modelling and simulation methods allow drug metabolism to be predicted in virtual patient populations prior to performing clinical studies in human subjects. This can be used to identify individuals most at risk of adverse reactions. History Studies on how people transform the substances that they ingest began in the mid-nineteenth century, with chemists discovering that organic chemicals such as benzaldehyde could be oxidized and conjugated to amino acids in the human body. During the remainder of the nineteenth century, several other basic detoxification reactions were discovered, such as methylation, acetylation, and sulfonation. In the early twentieth century, work moved on to the investigation of the enzymes and pathways that were responsible for the production of these metabolites. This field became defined as a separate area of study with the publication by Richard Williams of the book Detoxication mechanisms in 1947. This modern biochemical research resulted in the identification of glutathione S-transferases in 1961, followed by the discovery of cytochrome P450s in 1962, and the realization of their central role in xenobiotic metabolism in 1963.
See also
Biodegradation
Microbial biodegradation
Structuralism (psychology)
Structuralism in psychology (also structural psychology) is a theory of consciousness developed by Edward Bradford Titchener. This theory was challenged in the 20th century. Structuralists seek to analyze the adult mind (the total sum of experience from birth to the present) in terms of the simplest definable components of experience and then to find how these components fit together to form more complex experiences as well as how they correlate to physical events. To do this, structuralists employ introspection: self-reports of sensations, views, feelings, and emotions. Titchener Edward B. Titchener is credited with the theory of structuralism. It is considered to be the first "school" of psychology. Because he was a student of Wilhelm Wundt at the University of Leipzig, Titchener's ideas on how the mind worked were heavily influenced by Wundt's theory of voluntarism and his ideas of association and apperception (the passive and active combinations of elements of consciousness respectively). Titchener attempted to classify the structures of the mind in a similar way to how chemists classify the elements of nature. Titchener said that only observable events constituted that science and that any speculation concerning unobservable events has no place in society (this view was similar to the one expressed by Ernst Mach). In his book, Systematic Psychology, Titchener wrote: Mind and consciousness Titchener believed the mind was the accumulated experience of a lifetime. He believed that he could understand reasoning and the structure of the mind if he could define and categorize the basic components of mind and the rules by which the components interacted. Introspection The main tool Titchener used to try to determine the different components of consciousness was introspection. Titchener writes in his Systematic Psychology: The state of consciousness which is to be the matter of psychology ... can become an object of immediate knowledge only by way of introspection or self-awareness. and in his book An Outline of Psychology: ...within the sphere of psychology, introspection is the final and only court of appeal, that psychological evidence cannot be other than introspective evidence. Titchener had very strict guidelines for the reporting of an introspective analysis. The subject would be presented with an object, such as a pencil. The subject would then report the characteristics of that pencil (e.g., color and length). The subject would be instructed not to report the name of the object (pencil) because that did not describe the raw data of what the subject was experiencing. Titchener referred to this as stimulus error. In his translation of Wundt's work, Titchener illustrates Wundt as a supporter of introspection as a method through which to observe consciousness. However, introspection fits Wundt's theories only if the term is taken to refer to psychophysical methods. Introspection literally means 'looking within', to try to describe a person's memory, perceptions, cognitive processes, and/or motivations. Elements of the mind Structuralists believe that our consciousness is composed of individual parts which contribute to the overall structure and function of the mind. Titchener's theory began with the question of what each element of the mind is. He concluded from his research that there were three types of mental elements constituting conscious experience: sensations (elements of perceptions), images (elements of ideas), and affections (elements of emotions).
These elements could be broken down into their respective properties, which he determined were quality, intensity, duration, clearness, and extensity. Both sensations and images contained all of these qualities; however, affections were lacking in both clearness and extensity. And images and affections could be broken down further into just clusters of sensations. Therefore, by following this train of thinking all thoughts were images, which being constructed from elementary sensations meant that all complex reasoning and thought could eventually be broken down into just the sensations which he could get at through introspection. Interaction of elements The second issue in Titchener's theory of structuralism was the question of how the mental elements combined and interacted with each other to form conscious experience. His conclusions were largely based on ideas of associationism. In particular, Titchener focuses on the law of contiguity, which is the idea that the thought of something will tend to cause thoughts of things that are usually experienced along with it. Titchener rejected Wundt's notions of apperception and creative synthesis (voluntary action), which were the basis of Wundt's voluntarism. Titchener argued that attention was simply a manifestation of the "clearness" property within sensation. Physical and mental relationship Once Titchener identified the elements of mind and their interaction, his theory then asked the question of why the elements interact in the way they do. In particular, Titchener was interested in the relationship between the conscious experience and the physical processes. Titchener believed that the physical processes provide a continuous substratum that gives psychological processes a continuity they otherwise would not have. Therefore, the nervous system does not cause conscious experience, but can be used to explain some characteristics of mental events. Wundt and structuralism Wilhelm Wundt instructed Titchener, the founder of structuralism, at the University of Leipzig. The 'science of immediate experience' was stated by him. This simply means that the complex perceptions can be raised through basic sensory information. Wundt is often associated in past literature with structuralism and the use of similar introspective methods. Wundt makes a clear distinction between pure introspection, which is the relatively unstructured self-observation used by earlier philosophers, and experimental introspection. Wundt believes this type of introspection to be acceptable since it uses laboratory instruments to vary conditions and make results of internal perceptions more precise. The reason for this confusion lies in the translation of Wundt's writings. When Titchener brought his theory to America, he also brought with him Wundt's work. Titchener translated these works for the American audience, and in so doing misinterpreted Wundt's meaning. He then used this translation to show that Wundt supported his own theories. In fact, Wundt's main theory was that of psychological voluntarism (psychologischer Voluntarismus), the doctrine that the power of the will organizes the mind's content into higher-level thought processes. Criticisms Structuralism has faced a large amount of criticism, particularly from functionalism, the school of psychology which later evolved into the psychology of pragmatism (reconvening introspection into acceptable practices of observation). 
The main critique of structuralism was its focus on introspection as the method by which to gain an understanding of conscious experience. Critics argue that self-analysis was not feasible, since introspective students cannot appreciate the processes or mechanisms of their own mental processes. Introspection, therefore, yielded different results depending on who was using it and what they were seeking. Some critics also pointed out that introspective techniques actually resulted in retrospection – the memory of a sensation rather than the sensation itself. Behaviorists, specifically methodological behaviorists, fully rejected even the idea of the conscious experience as a worthy topic in psychology, since they believed that the subject matter of scientific psychology should be strictly operationalized in an objective and measurable way. Because the notion of a mind could not be objectively measured, it was not worth further inquiry. However, radical behaviorism includes thinking, feeling, and private events in its theory and analysis of psychology. Structuralism also held that the mind could be dissected into its individual parts, which then formed conscious experience. This also received criticism from the Gestalt school of psychology, which argues that the mind cannot be broken down into individual elements. Besides theoretical attacks, structuralism was criticized for excluding and ignoring important developments happening outside of structuralism. For instance, structuralism did not concern itself with the study of animal behavior and personality. Titchener himself was criticized for not using his psychology to help answer practical problems. Instead, Titchener was interested in seeking pure knowledge that to him was more important than commonplace issues. Alternatives One alternative theory to structuralism, to which Titchener took offense, was functionalism (functional psychology). Functionalism was developed by William James in contrast to structuralism. It stressed the importance of empirical, rational thought over an experimental, trial-and-error philosophy. James in his theory included introspection (i.e., the psychologist's study of his own states of mind), but also included things like analysis (i.e., the logical criticism of precursor and contemporary views of the mind), experiment (e.g., in hypnosis or neurology), and comparison (i.e., the use of statistical means to distinguish norms from anomalies), which gave it somewhat of an edge. Functionalism also differed in that it focused on how useful certain mental processes were to the organism in its environment, rather than on the processes themselves and other details, as structuralism did. Contemporary structuralism Researchers are still working to offer objective experimental approaches to measuring conscious experience, in particular within the field of cognitive psychology, which is in some ways carrying on the torch of Titchener's ideas. It is working on the same type of issues such as sensations and perceptions. Today, any introspective methodologies are done under highly controlled situations and are understood to be subjective and retrospective. Proponents argue that psychology can still gain useful information from using introspection in this case.
See also
Association of ideas
Associationism
Mentalism (psychology)
History of psychology
Nature
Production (economics)
Production is the process of combining various inputs, both material (such as metal, wood, glass, or plastics) and immaterial (such as plans, or knowledge) in order to create output. Ideally this output will be a good or service which has value and contributes to the utility of individuals. The area of economics that focuses on production is called production theory, and it is closely related to the consumption(or consumer) theory of economics. The production process and output directly result from productively utilising the original inputs (or factors of production). Known as primary producer goods or services, land, labour, and capital are deemed the three fundamental factors of production. These primary inputs are not significantly altered in the output process, nor do they become a whole component in the product. Under classical economics, materials and energy are categorised as secondary factors as they are byproducts of land, labour and capital. Delving further, primary factors encompass all of the resourcing involved, such as land, which includes the natural resources above and below the soil. However, there is a difference between human capital and labour. In addition to the common factors of production, in different economic schools of thought, entrepreneurship and technology are sometimes considered evolved factors in production. It is common practice that several forms of controllable inputs are used to achieve the output of a product. The production function assesses the relationship between the inputs and the quantity of output. Economic welfare is created in a production process, meaning all economic activities that aim directly or indirectly to satisfy human wants and needs. The degree to which the needs are satisfied is often accepted as a measure of economic welfare. In production there are two features which explain increasing economic welfare. The first is improving quality-price-ratio of goods and services and increasing incomes from growing and more efficient market production, and the second is total production which help in increasing GDP. The most important forms of production are: market production public production household production In order to understand the origin of economic well-being, we must understand these three production processes. All of them produce commodities which have value and contribute to the well-being of individuals. The satisfaction of needs originates from the use of the commodities which are produced. The need satisfaction increases when the quality-price-ratio of the commodities improves and more satisfaction is achieved at less cost. Improving the quality-price-ratio of commodities is to a producer an essential way to improve the competitiveness of products but this kind of gains distributed to customers cannot be measured with production data. Improving product competitiveness often means lower prices and to the producer lower producer income, to be compensated with higher sales volume. Economic well-being also increases due to income gains from increasing production. Market production is the only production form that creates and distributes incomes to stakeholders. Public production and household production are financed by the incomes generated in market production. Thus market production has a double role: creating well-being and producing goods and services and income creation. Because of this double role, market production is the “primus motor” of economic well-being. 
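The idea that a production function links input quantities to an output quantity can be illustrated with a standard textbook form. The Cobb–Douglas function in the Python sketch below is an illustrative choice, not a model proposed in this article, and the parameter values are assumed:

```python
def cobb_douglas(labour, capital, tfp=1.0, alpha=0.7, beta=0.3):
    """Cobb-Douglas production function: Q = A * L^alpha * K^beta.

    tfp (A) stands for total factor productivity; alpha and beta are the
    output elasticities of labour and capital (illustrative values only).
    """
    return tfp * (labour ** alpha) * (capital ** beta)

base = cobb_douglas(labour=100, capital=50)

# Growing an input raises output (a move along the production function) ...
more_labour = cobb_douglas(labour=110, capital=50)

# ... while a rise in total factor productivity shifts the whole function.
better_tech = cobb_douglas(labour=100, capital=50, tfp=1.05)

print(f"base output:            {base:.2f}")
print(f"+10% labour:            {more_labour:.2f}")
print(f"+5% total factor prod.: {better_tech:.2f}")
```

The two variations after the base case preview a distinction that recurs later in this article: output can grow because more input is used, or because productivity rises.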
Elements of production economics The underlying assumption of production is that maximisation of profit is the key objective of the producer. The difference between the value of production (the output value) and the costs (associated with the factors of production) is the calculated profit. Efficiency, technological, pricing, behavioural, consumption and productivity changes are a few of the critical elements that significantly influence production economics. Efficiency Within production, efficiency plays a tremendous role in achieving and maintaining full capacity, rather than producing an inefficient (not optimal) level. Changes in efficiency relate to the positive shift in current inputs, such as technological advancements, relative to the producer's position. Efficiency is calculated as the actual output divided by the maximum potential output. An example of the efficiency calculation is that if the applied inputs have the potential to produce 100 units but are producing 60 units, the efficiency of the output is 0.6, or 60%. Furthermore, economies of scale identify the point at which production efficiency (returns) can increase, decrease or remain constant. Technological changes This element sees the ongoing adaptation of technology at the frontier of the production function. Technological change is a significant determinant in advancing economic production results, as noted throughout economic histories, such as the industrial revolution. Therefore, it is critical to continue to monitor its effects on production and promote the development of new technologies. Behaviour, consumption and productivity There is a strong correlation between the producer's behaviour and the underlying assumption of production – both assume profit maximising behaviour. Production can either increase, decrease or remain constant as a result of consumption, amongst various other factors. The relationship between production and consumption mirrors the economic theory of supply and demand. Accordingly, when production decreases more than factor consumption, this results in reduced productivity. Contrarily, a production increase over consumption is seen as increased productivity. Pricing In an economic market, production input and output prices are assumed to be set by external factors, as the producer is a price taker. Hence, pricing is an important element in the real-world application of production economics. Should the pricing be too high, the production of the product is simply unviable. There is also a strong link between pricing and consumption, with this influencing the overall production scale. As a source of economic well-being In principle there are two main activities in an economy, production and consumption. Similarly, there are two kinds of actors, producers and consumers. Well-being is made possible by efficient production and by the interaction between producers and consumers. In the interaction, consumers can be identified in two roles, both of which generate well-being. Consumers can be both customers of the producers and suppliers to the producers. The customers' well-being arises from the commodities they are buying and the suppliers' well-being is related to the income they receive as compensation for the production inputs they have delivered to the producers. Stakeholders of production Stakeholders of production are persons, groups or organizations with an interest in a producing company.
Economic well-being originates in efficient production and it is distributed through the interaction between the company's stakeholders. The stakeholders of companies are economic actors which have an economic interest in a company. Based on the similarities of their interests, stakeholders can be classified into three groups in order to differentiate their interests and mutual relations. The three groups are as follows: Customers The customers of a company are typically consumers, other market producers or producers in the public sector. Each of them has their individual production functions. Due to competition, the price-quality-ratios of commodities tend to improve and this brings the benefits of better productivity to customers. Customers get more for less. In households and the public sector this means that more need satisfaction is achieved at less cost. For this reason, the productivity of customers can increase over time even though their incomes remain unchanged. Suppliers The suppliers of companies are typically producers of materials, energy, capital, and services. They all have their individual production functions. The changes in prices or qualities of supplied commodities have an effect on both actors' (company and suppliers) production functions. We come to the conclusion that the production functions of the company and its suppliers are in a state of continuous change. Producers Those participating in production, i.e., the labour force, society and owners, are collectively referred to as the producer community or producers. The producer community generates income from developing and growing production. The well-being gained through commodities stems from the price-quality relations of the commodities. Due to competition and development in the market, the price-quality relations of commodities tend to improve over time. Typically the quality of a commodity goes up and the price goes down over time. This development favourably affects the production functions of customers. Customers get more for less. Consumer customers get more satisfaction at less cost. This type of well-being generation can only partially be calculated from the production data. The situation is presented in this study. The producer community (labour force, society, and owners) earns income as compensation for the inputs they have delivered to the production. When the production grows and becomes more efficient, the income tends to increase. In production this brings about an increased ability to pay salaries, taxes and profits. The growth of production and improved productivity generate additional income for the producing community. Similarly, the high income level achieved in the community is a result of the high volume of production and its good performance. This type of well-being generation – as mentioned earlier - can be reliably calculated from the production data. Main processes of a producing company A producing company can be divided into sub-processes in different ways; yet, the following five are identified as main processes, each with a logic, objectives, theory and key figures of its own. It is important to examine each of them individually, yet, as a part of the whole, in order to be able to measure and understand them. The main processes of a company are as follows: real process. income distribution process production process. monetary process. market value process. 
Production output is created in the real process, gains of production are distributed in the income distribution process and these two processes constitute the production process. The production process and its sub-processes, the real process and income distribution process occur simultaneously, and only the production process is identifiable and measurable by the traditional accounting practices. The real process and income distribution process can be identified and measured by extra calculation, and this is why they need to be analyzed separately in order to understand the logic of production and its performance. Real process generates the production output from input, and it can be described by means of the production function. It refers to a series of events in production in which production inputs of different quality and quantity are combined into products of different quality and quantity. Products can be physical goods, immaterial services and most often combinations of both. The characteristics created into the product by the producer imply surplus value to the consumer, and on the basis of the market price this value is shared by the consumer and the producer in the marketplace. This is the mechanism through which surplus value originates to the consumer and the producer likewise. Surplus values to customers cannot be measured from any production data. Instead the surplus value to a producer can be measured. It can be expressed both in terms of nominal and real values. The real surplus value to the producer is an outcome of the real process, real income, and measured proportionally it means productivity. The concept “real process” in the meaning quantitative structure of production process was introduced in Finnish management accounting in the 1960s. Since then it has been a cornerstone in the Finnish management accounting theory. (Riistama et al. 1971) Income distribution process of the production refers to a series of events in which the unit prices of constant-quality products and inputs alter causing a change in income distribution among those participating in the exchange. The magnitude of the change in income distribution is directly proportionate to the change in prices of the output and inputs and to their quantities. Productivity gains are distributed, for example, to customers as lower product sales prices or to staff as higher income pay. The production process consists of the real process and the income distribution process. A result and a criterion of success of the owner is profitability. The profitability of production is the share of the real process result the owner has been able to keep to himself in the income distribution process. Factors describing the production process are the components of profitability, i.e., returns and costs. They differ from the factors of the real process in that the components of profitability are given at nominal prices whereas in the real process the factors are at periodically fixed prices. Monetary process refers to events related to financing the business. Market value process refers to a series of events in which investors determine the market value of the company in the investment markets. Production growth and performance Economic growth may be defined as a production increase of an output of a production process. It is usually expressed as a growth percentage depicting growth of the real production output. 
The real output is the real value of products produced in a production process and when we subtract the real input from the real output we get the real income. The real output and the real income are generated by the real process of production from the real inputs. The real process can be described by means of the production function. The production function is a graphical or mathematical expression showing the relationship between the inputs used in production and the output achieved. Both graphical and mathematical expressions are presented and demonstrated. The production function is a simple description of the mechanism of income generation in production process. It consists of two components. These components are a change in production input and a change in productivity. The figure illustrates an income generation process (exaggerated for clarity). The Value T2 (value at time 2) represents the growth in output from Value T1 (value at time 1). Each time of measurement has its own graph of the production function for that time (the straight lines). The output measured at time 2 is greater than the output measured at time one for both of the components of growth: an increase of inputs and an increase of productivity. The portion of growth caused by the increase in inputs is shown on line 1 and does not change the relation between inputs and outputs. The portion of growth caused by an increase in productivity is shown on line 2 with a steeper slope. So increased productivity represents greater output per unit of input. The growth of production output does not reveal anything about the performance of the production process. The performance of production measures production's ability to generate income. Because the income from production is generated in the real process, we call it the real income. Similarly, as the production function is an expression of the real process, we could also call it “income generated by the production function”. The real income generation follows the logic of the production function. Two components can also be distinguished in the income change: the income growth caused by an increase in production input (production volume) and the income growth caused by an increase in productivity. The income growth caused by increased production volume is determined by moving along the production function graph. The income growth corresponding to a shift of the production function is generated by the increase in productivity. The change of real income so signifies a move from the point 1 to the point 2 on the production function (above). When we want to maximize the production performance we have to maximize the income generated by the production function. The sources of productivity growth and production volume growth are explained as follows. Productivity growth is seen as the key economic indicator of innovation. The successful introduction of new products and new or altered processes, organization structures, systems, and business models generates growth of output that exceeds the growth of inputs. This results in growth in productivity or output per unit of input. Income growth can also take place without innovation through replication of established technologies. With only replication and without innovation, output will increase in proportion to inputs. (Jorgenson et al. 2014, 2) This is the case of income growth through production volume growth. Jorgenson et al. (2014, 2) give an empiric example. 
They show that the great preponderance of economic growth in the US since 1947 involves the replication of existing technologies through investment in equipment, structures, and software and expansion of the labor force. Further, they show that innovation accounts for only about twenty percent of US economic growth. In the case of a single production process (described above) the output is defined as an economic value of products and services produced in the process. When we want to examine an entity of many production processes we have to sum up the value-added created in the single processes. This is done in order to avoid the double accounting of intermediate inputs. Value-added is obtained by subtracting the intermediate inputs from the outputs. The most well-known and used measure of value-added is the GDP (Gross Domestic Product). It is widely used as a measure of the economic growth of nations and industries. Absolute (total) and average income The production performance can be measured as an average or an absolute income. Expressing performance both in average (avg.) and absolute (abs.) quantities is helpful for understanding the welfare effects of production. For measurement of the average production performance, we use the known productivity ratio Real output / Real input. The absolute income of performance is obtained by subtracting the real input from the real output as follows: Real income (abs.) = Real output – Real input The growth of the real income is the increase of the economic value that can be distributed between the production stakeholders. With the aid of the production model we can perform the average and absolute accounting in one calculation. Maximizing production performance requires using the absolute measure, i.e. the real income and its derivatives as a criterion of production performance. Maximizing productivity also leads to the phenomenon called "jobless growth" This refers to economic growth as a result of productivity growth but without creation of new jobs and new incomes from them. A practical example illustrates the case. When a jobless person obtains a job in market production we may assume it is a low productivity job. As a result, average productivity decreases but the real income per capita increases. Furthermore, the well-being of the society also grows. This example reveals the difficulty to interpret the total productivity change correctly. The combination of volume increase and total productivity decrease leads in this case to the improved performance because we are on the “diminishing returns” area of the production function. If we are on the part of “increasing returns” on the production function, the combination of production volume increase and total productivity increase leads to improved production performance. Unfortunately, we do not know in practice on which part of the production function we are. Therefore, a correct interpretation of a performance change is obtained only by measuring the real income change. Production function In the short run, the production function assumes there is at least one fixed factor input. The production function relates the quantity of factor inputs used by a business to the amount of output that result. There are three measure of production and productivity. The first one is total output (total product). It is straightforward to measure how much output is being produced in the manufacturing industries like motor vehicles. 
In the tertiary industry such as service or knowledge industries, it is harder to measure the outputs since they are less tangible. The second way of measuring production and efficiency is average output. It measures output per-worker-employed or output-per-unit of capital. The third measures of production and efficiency is the marginal product. It is the change in output from increasing the number of workers used by one person, or by adding one more machine to the production process in the short run. The law of diminishing marginal returns points out that as more units of a variable input are added to fixed amounts of land and capital, the change in total output would rise firstly and then fall. The length of time required for all the factor of production to be flexible varies from industry to industry. For example, in the nuclear power industry, it takes many years to commission new nuclear power plant and capacity. Real-life examples of the firm's short - term production equations may not be quite the same as the smooth production theory of the department. In order to improve efficiency and promote the structural transformation of economic growth, it is most important to establish the industrial development model related to it. At the same time, a shift should be made to models that contain typical characteristics of the industry, such as specific technological changes and significant differences in the likelihood of substitution before and after investment. Production models A production model is a numerical description of the production process and is based on the prices and the quantities of inputs and outputs. There are two main approaches to operationalize the concept of production function. We can use mathematical formulae, which are typically used in macroeconomics (in growth accounting) or arithmetical models, which are typically used in microeconomics and management accounting. We do not present the former approach here but refer to the survey “Growth accounting” by Hulten 2009. Also see an extensive discussion of various production models and their estimations in Sickles and Zelenyuk (2019, Chapter 1-2). We use here arithmetical models because they are like the models of management accounting, illustrative and easily understood and applied in practice. Furthermore, they are integrated to management accounting, which is a practical advantage. A major advantage of the arithmetical model is its capability to depict production function as a part of production process. Consequently, production function can be understood, measured, and examined as a part of production process. There are different production models according to different interests. Here we use a production income model and a production analysis model in order to demonstrate production function as a phenomenon and a measureable quantity. Production income model The scale of success run by a going concern is manifold, and there are no criteria that might be universally applicable to success. Nevertheless, there is one criterion by which we can generalise the rate of success in production. This criterion is the ability to produce surplus value. As a criterion of profitability, surplus value refers to the difference between returns and costs, taking into consideration the costs of equity in addition to the costs included in the profit and loss statement as usual. 
Surplus value indicates that the output has more value than the sacrifice made for it, in other words, the output value is higher than the value (production costs) of the used inputs. If the surplus value is positive, the owner's profit expectation has been surpassed. The table presents a surplus value calculation. We call this set of production data a basic example and we use the data through the article in illustrative production models. The basic example is a simplified profitability calculation used for illustration and modelling. Even as reduced, it comprises all phenomena of a real measuring situation and most importantly the change in the output-input mix between two periods. Hence, the basic example works as an illustrative “scale model” of production without any features of a real measuring situation being lost. In practice, there may be hundreds of products and inputs but the logic of measuring does not differ from that presented in the basic example. In this context, we define the quality requirements for the production data used in productivity accounting. The most important criterion of good measurement is the homogenous quality of the measurement object. If the object is not homogenous, then the measurement result may include changes in both quantity and quality but their respective shares will remain unclear. In productivity accounting this criterion requires that every item of output and input must appear in accounting as being homogenous. In other words, the inputs and the outputs are not allowed to be aggregated in measuring and accounting. If they are aggregated, they are no longer homogenous and hence the measurement results may be biased. Both the absolute and relative surplus value have been calculated in the example. Absolute value is the difference of the output and input values and the relative value is their relation, respectively. The surplus value calculation in the example is at a nominal price, calculated at the market price of each period. Production analysis model A model used here is a typical production analysis model by help of which it is possible to calculate the outcome of the real process, income distribution process and production process. The starting point is a profitability calculation using surplus value as a criterion of profitability. The surplus value calculation is the only valid measure for understanding the connection between profitability and productivity or understanding the connection between real process and production process. A valid measurement of total productivity necessitates considering all production inputs, and the surplus value calculation is the only calculation to conform to the requirement. If we omit an input in productivity or income accounting, this means that the omitted input can be used unlimitedly in production without any cost impact on accounting results. Accounting and interpreting The process of calculating is best understood by applying the term ceteris paribus, i.e. "all other things being the same," stating that at a time only the impact of one changing factor be introduced to the phenomenon being examined. Therefore, the calculation can be presented as a process advancing step by step. First, the impacts of the income distribution process are calculated, and then, the impacts of the real process on the profitability of the production. 
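A minimal sketch of this two-step, ceteris paribus decomposition is given below. The prices and quantities are hypothetical (one output and two inputs only), not the data of the article's basic example; the point is only to show how an auxiliary calculation that values Period 1 quantities at Period 2 prices splits the change in profitability into an income-distribution (price) impact and a real-process (quantity) impact.

# Hypothetical two-period data: (price, quantity) for one output and two inputs.
output_p1, output_q1, output_p2, output_q2         = 10.0, 100.0, 10.5, 110.0
labour_p1, labour_q1, labour_p2, labour_q2         = 6.0, 80.0, 6.4, 82.0
material_p1, material_q1, material_p2, material_q2 = 4.0, 90.0, 3.9, 95.0

def surplus(op, oq, lp, lq, mp, mq):
    return op * oq - (lp * lq + mp * mq)

sv_period1   = surplus(output_p1, output_q1, labour_p1, labour_q1, material_p1, material_q1)
sv_auxiliary = surplus(output_p2, output_q1, labour_p2, labour_q1, material_p2, material_q1)  # Period 1 quantities at Period 2 prices
sv_period2   = surplus(output_p2, output_q2, labour_p2, labour_q2, material_p2, material_q2)

income_distribution_impact = sv_auxiliary - sv_period1   # price changes only
real_process_impact        = sv_period2 - sv_auxiliary   # quantity changes only
assert abs((sv_period2 - sv_period1)
           - (income_distribution_impact + real_process_impact)) < 1e-9
# The two impacts add up exactly to the change in profitability, mirroring the balance
# between income generation and income distribution described in the text.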
The first step of the calculation is to separate the impacts of the real process and the income distribution process, respectively, from the change in profitability (285.12 – 266.00 = 19.12). This takes place by simply creating one auxiliary column (4) in which a surplus value calculation is compiled using the quantities of Period 1 and the prices of Period 2. In the resulting profitability calculation, Columns 3 and 4 depict the impact of a change in income distribution process on the profitability and in Columns 4 and 7 the impact of a change in real process on the profitability. The accounting results are easily interpreted and understood. We see that the real income has increased by 58.12 units from which 41.12 units come from the increase of productivity growth and the rest 17.00 units come from the production volume growth. The total increase of real income (58.12) is distributed to the stakeholders of production, in this case, 39.00 units to the customers and to the suppliers of inputs and the rest 19.12 units to the owners. Here we can make an important conclusion. Income formation of production is always a balance between income generation and income distribution. The income change created in a real process (i.e. by production function) is always distributed to the stakeholders as economic values within the review period. Accordingly, the changes in real income and income distribution are always equal in terms of economic value. Based on the accounted changes of productivity and production volume values we can explicitly conclude on which part of the production function the production is. The rules of interpretations are the following: The production is on the part of “increasing returns” on the production function, when productivity and production volume increase or productivity and production volume decrease The production is on the part of “diminishing returns” on the production function, when productivity decreases and volume increases or productivity increases and volume decreases. In the basic example, the combination of volume growth (+17.00) and productivity growth (+41.12) reports explicitly that the production is on the part of “increasing returns” on the production function (Saari 2006 a, 138–144). Another production model (Production Model Saari 1989) also gives details of the income distribution (Saari 2011,14). Because the accounting techniques of the two models are different, they give differing, although complementary, analytical information. The accounting results are, however, identical. We do not present the model here in detail but we only use its detailed data on income distribution, when the objective functions are formulated in the next section. Objective functions An efficient way to improve the understanding of production performance is to formulate different objective functions according to the objectives of the different interest groups. Formulating the objective function necessitates defining the variable to be maximized (or minimized). After that other variables are considered as constraints or free variables. The most familiar objective function is profit maximization which is also included in this case. Profit maximization is an objective function that stems from the owner's interest and all other variables are constraints in relation to maximizing of profits in the organization. The procedure for formulating objective functions The procedure for formulating different objective functions, in terms of the production model, is introduced next. 
In the income formation from production the following objective functions can be identified: Maximizing the real income Maximizing the producer income Maximizing the owner income. These cases are illustrated using the numbers from the basic example. The following symbols are used in the presentation: The equal sign (=) signifies the starting point of the computation or the result of computing and the plus or minus sign (+ / -) signifies a variable that is to be added or subtracted from the function. A producer means here the producer community, i.e. labour force, society and owners. Objective function formulations can be expressed in a single calculation which concisely illustrates the logic of the income generation, the income distribution and the variables to be maximized. The calculation resembles an income statement starting with the income generation and ending with the income distribution. The income generation and the distribution are always in balance so that their amounts are equal. In this case, it is 58.12 units. The income which has been generated in the real process is distributed to the stakeholders during the same period. There are three variables that can be maximized. They are the real income, the producer income and the owner income. Producer income and owner income are practical quantities because they are addable quantities and they can be computed quite easily. Real income is normally not an addable quantity and in many cases it is difficult to calculate. The dual approach for the formulation Here we have to add that the change of real income can also be computed from the changes in income distribution. We have to identify the unit price changes of outputs and inputs and calculate their profit impacts (i.e. unit price change x quantity). The change of real income is the sum of these profit impacts and the change of owner income. This approach is called the dual approach because the framework is seen in terms of prices instead of quantities (ONS 3, 23). The dual approach has been recognized in growth accounting for long but its interpretation has remained unclear. The following question has remained unanswered: “Quantity based estimates of the residual are interpreted as a shift in the production function, but what is the interpretation of the price-based growth estimates?” (Hulten 2009, 18). We have demonstrated above that the real income change is achieved by quantitative changes in production and the income distribution change to the stakeholders is its dual. In this case, the duality means that the same accounting result is obtained by accounting the change of the total income generation (real income) and by accounting the change of the total income distribution. See also Adaptive strategies A list of production functions Assembly line Johann Heinrich von Thünen Division of labour Industrial Revolution Cost-of-production theory of value Computer-aided manufacturing DIRTI 5 Distribution (economics) Factors of production Outline of industrial organization Outline of production Output (economics) Price Prices of production Pricing strategies Product (business) Production function Production theory basics Production possibility frontier Productive and unproductive labour Productive forces Productivism Productivity Productivity model Productivity improving technologies (historical) Microeconomics Mode of production Mass production Second Industrial Revolution Footnotes References Sickles, R., and Zelenyuk, V. (2019). 
Measurement of Productivity and Efficiency: Theory and Practice. Cambridge: Cambridge University Press. Further references and external links Moroney, J. R. (1967) "Cobb-Douglass production functions and returns to scale in US manufacturing industry", Western Economic Journal, vol 6, no 1, December 1967, pp 39–51. Pearl, D. and Enos, J. (1975) "Engineering production functions and technological progress", The Journal of Industrial Economics, vol 24, September 1975, pp 55–72. Robinson, J. (1953) "The production function and the theory of capital", Review of Economic Studies, vol XXI, 1953, pp. 81–106. Anwar Shaikh, "Laws of Production and Laws of Algebra: The Humbug Production Function", in The Review of Economics and Statistics, Volume 56(1), February 1974, pp. 115–120. Anwar Shaikh, "Laws of Production and Laws of Algebra – Humbug II", in Growth, Profits and Property, ed. by Edward J. Nell. Cambridge: Cambridge University Press, 1980. Anwar Shaikh, "Nonlinear Dynamics and Pseudo-Production Functions", 2008. Shephard, R. (1970). Theory of Cost and Production Functions, Princeton University Press, Princeton NJ. Thompson, A. (1981). Economics of the Firm: Theory and Practice, 3rd edition, Prentice Hall, Englewood Cliffs. Elmer G. Wiens: Production Functions – Models of the Cobb-Douglas, C.E.S., Trans-Log, and Diewert Production Functions. Production economics
Kinetic isotope effect
In physical organic chemistry, a kinetic isotope effect (KIE) is the change in the reaction rate of a chemical reaction when one of the atoms in the reactants is replaced by one of its isotopes. Formally, it is the ratio of rate constants for the reactions involving the light (k_light) and the heavy (k_heavy) isotopically substituted reactants (isotopologues): KIE = k_light/k_heavy. This change in reaction rate is a quantum effect that occurs mainly because heavier isotopologues have lower vibrational frequencies than their lighter counterparts. In most cases, this implies that a greater energy input is needed for heavier isotopologues to reach the transition state (or, in rare cases, the dissociation limit), and therefore a slower reaction rate. The study of KIEs can help elucidate reaction mechanisms, and is occasionally exploited in drug development to improve unfavorable pharmacokinetics by protecting metabolically vulnerable C-H bonds. Background The KIE is considered one of the most essential and sensitive tools for studying reaction mechanisms, knowledge of which allows the desirable qualities of the corresponding reactions to be improved. For example, KIEs can be used to reveal whether a nucleophilic substitution reaction follows a unimolecular (SN1) or bimolecular (SN2) pathway. In the reaction of methyl bromide and cyanide (shown in the introduction), the observed methyl carbon KIE indicates an SN2 mechanism. Depending on the pathway, different strategies may be used to stabilize the transition state of the rate-determining step of the reaction and improve the reaction rate and selectivity, which are important for industrial applications. Isotopic rate changes are most pronounced when the relative mass change is greatest, since the effect is related to the vibrational frequencies of the affected bonds. Thus, replacing normal hydrogen (¹H) with its isotope deuterium (D, or ²H) doubles the mass, whereas replacing carbon-12 with carbon-13 increases the mass by only about 8%. The rate of a reaction involving a C–H bond is typically 6–10 times faster than that of the corresponding C–D bond, whereas a ¹²C reaction is only about 4% faster than the corresponding ¹³C reaction, even though in both cases the isotope is one atomic mass unit (1 Da) heavier. Isotopic substitution can modify the reaction rate in a variety of ways. In many cases, the rate difference can be rationalized by noting that the mass of an atom affects the vibrational frequency of the chemical bond that it forms, even if the potential energy surface for the reaction is nearly identical. Heavier isotopes lead (classically) to lower vibrational frequencies or, viewed quantum mechanically, to a lower zero-point energy (ZPE). With a lower ZPE, more energy must be supplied to break the bond, resulting in a higher activation energy for bond cleavage, which in turn lowers the measured rate (see, for example, the Arrhenius equation). Classification Primary kinetic isotope effects A primary kinetic isotope effect (PKIE) may be found when a bond to the isotopically labeled atom is being formed or broken. Depending on the way a KIE is probed (parallel measurement of rates vs. intermolecular competition vs. intramolecular competition), the observation of a PKIE is indicative of breaking or forming a bond to the isotope at the rate-limiting step, or at subsequent product-determining step(s). (The misconception that a PKIE must reflect bond cleavage/formation to the isotope at the rate-limiting step is often repeated in textbooks and the primary literature: see the section on experiments below.)
For the aforementioned nucleophilic substitution reactions, PKIEs have been investigated for both the leaving groups, the nucleophiles, and the α-carbon at which the substitution occurs. Interpretation of the leaving group KIEs was difficult at first due to significant contributions from temperature independent factors. KIEs at the α-carbon can be used to develop some understanding into the symmetry of the transition state in S2 reactions, though this KIE is less sensitive than what would be ideal, also due to contribution from non-vibrational factors. Secondary kinetic isotope effects A secondary kinetic isotope effect (SKIE) is observed when no bond to the isotopically labeled atom in the reactant is broken or formed. SKIEs tend to be much smaller than PKIEs; however, secondary deuterium isotope effects can be as large as 1.4 per H atom, and techniques have been developed to measure heavy-element isotope effects to very high precision, so SKIEs are still very useful for elucidating reaction mechanisms. For the aforementioned nucleophilic substitution reactions, secondary hydrogen KIEs at the α-carbon provide a direct means to distinguish between S1 and S2 reactions. It has been found that S1 reactions typically lead to large SKIEs, approaching to their theoretical maximum at about 1.22, while S2 reactions typically yield SKIEs that are very close to or less than 1. KIEs greater than 1 are called normal kinetic isotope effects, while KIEs less than 1 are called inverse kinetic isotope effects (IKIE). In general, smaller force constants in the transition state are expected to yield a normal KIE, and larger force constants in the transition state are expected to yield an IKIE when stretching vibrational contributions dominate the KIE. The magnitudes of such SKIEs at the α-carbon atom are largely determined by the C-H(H) vibrations. For an S1 reaction, since the carbon atom is converted into an sp hybridized carbenium ion during the transition state for the rate-determining step with an increase in C-H(H) bond order, an IKIE would be expected if only the stretching vibrations were important. The observed large normal KIEs are found to be caused by significant out-of-plane bending vibrational contributions when going from the reactants to the transition state of carbenium ion formation. For S2 reactions, bending vibrations still play an important role for the KIE, but stretching vibrational contributions are of more comparable magnitude, and the resulting KIE may be normal or inverse depending on the specific contributions of the respective vibrations. Theory The theoretical treatment of isotope effects relies heavily on transition state theory, which assumes a single potential energy surface for the reaction, and a barrier between the reactants and the products on this surface, on top of which resides the transition state. The KIE arises largely from the changes to vibrational ground states produced by the isotopic perturbation along the minimum energy pathway of the potential energy surface, which may only be accounted for with quantum mechanical treatments of the system. Depending on the mass of the atom that moves along the reaction coordinate and nature (width and height) of the energy barrier, quantum tunnelling may also make a large contribution to an observed kinetic isotope effect and may need to be separately considered, in addition to the "semi-classical" transition state theory model. 
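Before the detailed treatment that follows, the semi-classical picture can be summarized schematically. Under the assumptions just stated (a single, isotope-independent potential energy surface, transition state theory, and no tunneling), and writing L and H for the light and heavy isotopologue, the rate ratio can be expressed through the activation free energies and, when zero-point energies dominate the isotopic difference, through the ZPE changes alone:

\frac{k_\mathrm{L}}{k_\mathrm{H}}
  = \exp\!\left(\frac{\Delta G^{\ddagger}_\mathrm{H} - \Delta G^{\ddagger}_\mathrm{L}}{RT}\right)
  \approx \exp\!\left(\frac{\bigl(\mathrm{ZPE}_\mathrm{L} - \mathrm{ZPE}_\mathrm{H}\bigr)_{\mathrm{reactant}} - \bigl(\mathrm{ZPE}_\mathrm{L} - \mathrm{ZPE}_\mathrm{H}\bigr)_{\ddagger}}{k_\mathrm{B}T}\right)

Here the first form uses molar activation free energies (hence RT) and the second uses per-molecule zero-point energies (hence k_B T); because the reactant-state ZPE gap between the light and heavy isotopologue usually shrinks at the transition state, the exponent is normally positive and a normal KIE (greater than 1) results.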
The deuterium kinetic isotope effect (H KIE) is by far the most common, useful, and well-understood type of KIE. The accurate prediction of the numerical value of a H KIE using density functional theory calculations is now fairly routine. Moreover, several qualitative and semi-quantitative models allow rough estimates of deuterium isotope effects to be made without calculations, often providing enough information to rationalize experimental data or even support or refute different mechanistic possibilities. Starting materials containing H are often commercially available, making the synthesis of isotopically enriched starting materials relatively straightforward. Also, due to the large relative difference in the mass of H and H and the attendant differences in vibrational frequency, the isotope effect is larger than for any other pair of isotopes except H and H, allowing both primary and secondary isotope effects to be easily measured and interpreted. In contrast, secondary effects are generally very small for heavier elements and close in magnitude to the experimental uncertainty, which complicates their interpretation and limits their utility. In the context of isotope effects, hydrogen often means the light isotope, protium (H), specifically. In the rest of this article, reference to hydrogen and deuterium in parallel grammatical constructions or direct comparisons between them should be interpreted as meaning H and H. The theory of KIEs was first formulated by Jacob Bigeleisen in 1949. Bigeleisen's general formula for H KIEs (which is also applicable to heavier elements) is given below. It employs transition state theory and a statistical mechanical treatment of translational, rotational, and vibrational levels for the calculation of rate constants k and k. However, this formula is "semi-classical" in that it neglects the contribution from quantum tunneling, which is often introduced as a separate correction factor. Bigeleisen's formula also does not deal with differences in non-bonded repulsive interactions caused by the slightly shorter C–D bond compared to a C–H bond. In the equation, subscript H or D refer to the species with H or H, respectively; quantities with or without the double-dagger, ‡, refer to transition state or reactant ground state, respectively. (Strictly speaking, a term resulting from an isotopic difference in transmission coefficients should also be included.) , where we define and . Here, h = Planck constant; k = Boltzmann constant; = frequency of vibration, expressed in wavenumber; c = speed of light; N = Avogadro constant; and R = universal gas constant. The σ (X = H or D) are the symmetry numbers for the reactants and transition states. The M are the molecular masses of the corresponding species, and the I (q = x, y, or z) terms are the moments of inertia about the three principal axes. The u are directly proportional to the corresponding vibrational frequencies, ν, and the vibrational zero-point energy (ZPE) (see below). The integers N and N are the number of atoms in the reactants and the transition states, respectively. The complicated expression given above can be represented as the product of four separate factors: . For the special case of H isotope effects, we will argue that the first three terms can be treated as equal to or well approximated by unity. The first factor S (containing σ) is the ratio of the symmetry numbers for the various species. 
This will be a rational number (a ratio of integers) that depends on the number of molecular and bond rotations leading to the permutation of identical atoms or groups in the reactants and the transition state. For systems of low symmetry, all σ (reactant and transition state) will be unity; thus S can often be neglected. The MMI factor (containing the M and I) refers to the ratio of the molecular masses and the moments of inertia. Since hydrogen and deuterium tend to be much lighter than most reactants and transition states, there is little difference in the molecular masses and moments of inertia between H and D containing molecules, so the MMI factor is usually also approximated as unity. The EXC factor (containing the product of vibrational partition functions) corrects for the KIE caused by the reactions of vibrationally excited molecules. The fraction of molecules with enough energy to have excited state A–H/D bond vibrations is generally small for reactions at or near room temperature (bonds to hydrogen usually vibrate at 1000 cm or higher, so exp(-u) = exp(-hν/kT) < 0.01 at 298 K, resulting in negligible contributions from the 1–exp(-u) factors). Hence, for hydrogen/deuterium KIEs, the observed values are typically dominated by the last factor, ZPE (an exponential function of vibrational ZPE differences), consisting of contributions from the ZPE differences for each of the vibrational modes of the reactants and transition state, which can be represented as follows: , where we define and . The sums in the exponent of the second expression can be interpreted as running over all vibrational modes of the reactant ground state and the transition state. Or, one may interpret them as running over those modes unique to the reactant or the transition state or whose vibrational frequencies change substantially upon advancing along the reaction coordinate. The remaining pairs of reactant and transition state vibrational modes have very similar and , and cancellations occur when the sums in the exponent are calculated. Thus, in practice, H KIEs are often largely dependent on a handful of key vibrational modes because of this cancellation, making qualitative analyses of k/k possible. As mentioned, especially for H/H substitution, most KIEs arise from the difference in ZPE between the reactants and the transition state of the isotopologues; this difference can be understood qualitatively as follows: in the Born–Oppenheimer approximation, the potential energy surface is the same for both isotopic species. However, a quantum treatment of the energy introduces discrete vibrational levels onto this curve, and the lowest possible energy state of a molecule corresponds to the lowest vibrational energy level, which is slightly higher in energy than the minimum of the potential energy curve. This difference, known as the ZPE, is a manifestation of the uncertainty principle that necessitates an uncertainty in the C-H or C-D bond length. Since the heavier (in this case the deuterated) species behaves more "classically", its vibrational energy levels are closer to the classical potential energy curve, and it has a lower ZPE. The ZPE differences between the two isotopic species, at least in most cases, diminish in the transition state, since the bond force constant decreases during bond breaking. Hence, the lower ZPE of the deuterated species translates into a larger activation energy for its reaction, as shown in the following figure, leading to a normal KIE. 
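A rough numerical sketch of this zero-point-energy argument is given below. It uses the representative C–H and C–D stretching wavenumbers quoted later in this article (about 3000 and 2200 cm⁻¹) and assumes, purely for illustration, that their ZPE difference disappears completely at the transition state.

import math

h, c, kB, T = 6.62607015e-34, 2.99792458e10, 1.380649e-23, 298.0   # SI units, c in cm/s

# Reduced-mass estimate of the isotope shift for a C-H versus a C-D oscillator:
mu_CH = 12.0 * 1.0 / (12.0 + 1.0)
mu_CD = 12.0 * 2.0 / (12.0 + 2.0)
print(math.sqrt(mu_CH / mu_CD))      # ~0.73: a 3000 cm^-1 C-H stretch shifts to ~2200 cm^-1

# Reactant-state ZPE difference of the two stretches, assumed lost at the transition state:
delta_zpe = h * c * (3000.0 - 2200.0) / 2.0      # J per molecule (~1.15 kcal/mol)
print(math.exp(delta_zpe / (kB * T)))            # ~6.9, the semi-classical maximum kH/kD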
This effect should, in principle, be taken into account all 3N−6 vibrational modes for the starting material and 3N−7 vibrational modes at the transition state (one mode, the one corresponding to the reaction coordinate, is missing at the transition state, since a bond breaks and there is no restorative force against the motion). The harmonic oscillator is a good approximation for a vibrating bond, at least for low-energy vibrational states. Quantum mechanics gives the vibrational ZPE as . Thus, we can readily interpret the factor of and the sums of terms over ground state and transition state vibrational modes in the exponent of the simplified formula above. For a harmonic oscillator, vibrational frequency is inversely proportional to the square root of the reduced mass of the vibrating system: , where k is the force constant. Moreover, the reduced mass is approximated by the mass of the light atom of the system, X = H or D. Because m ≈ 2m, . In the case of homolytic C–H/D bond dissociation, the transition state term disappears; and neglecting other vibrational modes, k/k = exp(Δu). Thus, a larger isotope effect is observed for a stiffer ("stronger") C–H/D bond. For most reactions of interest, a hydrogen atom is transferred between two atoms, with a transition-state [A···H···B] and vibrational modes at the transition state need to be accounted for. Nevertheless, it is still generally true that cleavage of a bond with a higher vibrational frequency will give a larger isotope effect. To calculate the maximum possible value for a non-tunneling H KIE, we consider the case where the ZPE difference between the stretching vibrations of a C-H bond (3000 cm) and C-H bond (2200 cm) disappears in the transition state (an energy difference of [3000 – 2200 cm]/2 = 400 cm ≈ 1.15 kcal/mol), without any compensation from a ZPE difference at the transition state (e.g., from the symmetric A···H···B stretch, which is unique to the transition state). The simplified formula above, predicts a maximum for k/k as 6.9. If the complete disappearance of two bending vibrations is also included, k/k values as large as 15-20 can be predicted. Bending frequencies are very unlikely to vanish in the transition state, however, and there are only a few cases in which k/k values exceed 7-8 near room temperature. Furthermore, it is often found that tunneling is a major factor when they do exceed such values. A value of k/k ~ 10 is thought to be maximal for a semi-classical PKIE (no tunneling) for reactions at ≈298 K. (The formula for k/k has a temperature dependence, so larger isotope effects are possible at lower temperatures.) Depending on the nature of the transition state of H-transfer (symmetric vs. "early" or "late" and linear vs. bent); the extent to which a primary H isotope effect approaches this maximum, varies. A model developed by Westheimer predicted that symmetrical (thermoneutral, by Hammond's postulate), linear transition states have the largest isotope effects, while transition states that are "early" or "late" (for exothermic or endothermic reactions, respectively), or nonlinear (e.g. cyclic) exhibit smaller effects. These predictions have since received extensive experimental support. For secondary H isotope effects, Streitwieser proposed that weakening (or strengthening, in the case of an inverse isotope effect) of bending modes from the reactant ground state to the transition state are largely responsible for observed isotope effects. 
These changes are attributed to a change in steric environment when the carbon bound to the H/D undergoes rehybridization from sp to sp or vice versa (an α SKIE), or bond weakening due to hyperconjugation in cases where a carbocation is being generated one carbon atom away (a β SKIE). These isotope effects have a theoretical maximum of k/k = 2 ≈ 1.4. For a SKIE at the α position, rehybridization from sp to sp produces a normal isotope effect, while rehybridization from sp to sp results in an inverse isotope effect with a theoretical minimum of k/k = 2 ≈ 0.7. In practice, k/k ~ 1.1-1.2 and k/k ~ 0.8-0.9 are typical for α SKIEs, while k/k ~ 1.15-1.3 are typical for β SKIE. For reactants containing several isotopically substituted β-hydrogens, the observed isotope effect is often the result of several H/D's at the β position acting in concert. In these cases, the effect of each isotopically labeled atom is multiplicative, and cases where k/k > 2 are not uncommon. The following simple expressions relating H and H KIEs, which are also known as the Swain equation (or the Swain-Schaad-Stivers equations), can be derived from the general expression given above using some simplifications: ; i.e., . In deriving these expressions, the reasonable approximation that reduced mass roughly equals the mass of the H, H, or H, was used. Also, the vibrational motion was assumed to be approximated by a harmonic oscillator, so that ; X = H. The subscript "s" refers to these "semi-classical" KIEs, which disregard quantum tunneling. Tunneling contributions must be treated separately as a correction factor. For isotope effects involving elements other than hydrogen, many of these simplifications are not valid, and the magnitude of the isotope effect may depend strongly on some or all of the neglected factors. Thus, KIEs for elements other than hydrogen are often much harder to rationalize or interpret. In many cases and especially for hydrogen-transfer reactions, contributions to KIEs from tunneling are significant (see below). Tunneling In some cases, a further rate enhancement is seen for the lighter isotope, possibly due to quantum tunneling. This is typically only observed for reactions involving bonds to hydrogen. Tunneling occurs when a molecule penetrates through a potential energy barrier rather than over it. Though not allowed by classical mechanics, particles can pass through classically forbidden regions of space in quantum mechanics based on wave–particle duality. Tunneling can be analyzed using Bell's modification of the Arrhenius equation, which includes the addition of a tunneling factor, Q: where A is the Arrhenius parameter, E is the barrier height and where and Examination of the β term shows exponential dependence on the particle's mass. As a result, tunneling is much more likely for a lighter particle such as hydrogen. Simply doubling the mass of a tunneling proton by replacing it with a deuteron drastically reduces the rate of such reactions. As a result, very large KIEs are observed that can not be accounted for by differences in ZPEs. Also, the β term depends linearly with barrier width, 2a. As with mass, tunneling is greatest for small barrier widths. Optimal tunneling distances of protons between donor and acceptor atom is 40 pm. Transient kinetic isotope effect Isotopic effect expressed with the equations given above only refer to reactions that can be described with first-order kinetics. 
In all instances in which this is not possible, transient KIEs should be taken into account using the GEBIK and GEBIF equations. Experiments Simmons and Hartwig refer to the following three cases as the main types of KIE experiments involving C-H bond functionalization: A) KIE determined from absolute rates of two parallel reactions In this experiment, the rate constants for the normal substrate and its isotopically labeled analogue are determined independently, and the KIE is obtained as a ratio of the two. The accuracy of the measured KIE is severely limited by the accuracy with which each of these rate constants can be measured. Furthermore, reproducing the exact conditions in the two parallel reactions can be very challenging. Nevertheless, a measurement of a large kinetic isotope effect through direct comparison of rate constants is indicative that C-H bond cleavage occurs at the rate-determining step. (A smaller value could indicate an isotope effect due to a pre-equilibrium, so that the C-H bond cleavage occurs somewhere before the rate-determining step.) B) KIE determined from an intermolecular competition This type of experiment, uses the same substrates as used in Experiment A, but they are allowed in to react in the same container, instead of two separate containers. The KIE in this experiment is determined by the relative amount of products formed from C-H versus C-D functionalization (or it can be inferred from the relative amounts of unreacted starting materials). One must quench the reaction before it goes to completion to observe the KIE (see the Evaluation section below). Generally, the reaction is halted at low conversion (~5 to 10% conversion) or a large excess (> 5 equiv.) of the isotopic mixture is used. This experiment type ensures that both C-H and C-D bond functionalizations occur under exactly the same conditions, and the ratio of products from C-H and C-D bond functionalizations can be measured with much greater precision than the rate constants in Experiment A. Moreover, only a single measurement of product concentrations from a single sample is required. However, an observed kinetic isotope effect from this experiment is more difficult to interpret, since it may either mean that C-H bond cleavage occurs during the rate-determining step or at a product-determining step ensuing the rate-determining step. The absence of a KIE, at least according to Simmons and Hartwig, is nonetheless indicative of the C-H bond cleavage not occurring during the rate-determining step. C) KIE determined from an intramolecular competition This type of experiment is analogous to Experiment B, except this time there is an intramolecular competition for the C-H or C-D bond functionalization. In most cases, the substrate possesses a directing group (DG) between the C-H and C-D bonds. Calculation of the KIE from this experiment and its interpretation follow the same considerations as that of Experiment B. However, the results of Experiments B and C will differ if the irreversible binding of the isotope-containing substrate takes place in Experiment B prior to the cleavage of the C-H or C-D bond. In such a scenario, an isotope effect may be observed in Experiment C (where choice of the isotope can take place even after substrate binding) but not in Experiment B (since the choice of whether C-H or C-D bond cleaves is already made as soon as the substrate binds irreversibly). 
In contrast to Experiment B, the reaction need not be halted at low consumption of the isotopic starting material to obtain an accurate kH/kD, since the ratio of H and D in the starting material is 1:1 regardless of the extent of conversion. One non-C-H-activation example of different isotope effects being observed under intermolecular (Experiment B) and intramolecular (Experiment C) competition is the photolysis of diphenyldiazomethane in the presence of t-butylamine. To explain this result, the formation of diphenylcarbene, followed by irreversible nucleophilic attack by t-butylamine, was proposed. Because there is little isotopic difference in the rate of nucleophilic attack, the intermolecular experiment resulted in a KIE close to 1. In the intramolecular case, however, the product ratio is determined by the proton transfer that occurs after the nucleophilic attack, a process with a substantial KIE of 2.6. Thus, Experiments A, B, and C give results of differing levels of precision and require different experimental setups and ways of analyzing data. As a result, the feasibility of each type of experiment depends on the kinetic and stoichiometric profile of the reaction, as well as the physical characteristics of the reaction mixture (e.g. homogeneous vs. heterogeneous). Moreover, as noted in the paragraph above, the experiments provide KIE data for different steps of a multi-step reaction, depending on the relative locations of the rate-limiting step, product-determining steps, and/or the C-H/D cleavage step. The hypothetical examples below illustrate common scenarios. Consider the following reaction coordinate diagram. For a reaction with this profile, all three experiments (A, B, and C) will yield a significant primary KIE. On the other hand, if a reaction follows an energy profile in which the C-H or C-D bond cleavage is irreversible but occurs after the rate-determining step (RDS), no significant KIE will be observed with Experiment A, since the overall rate is not affected by the isotopic substitution. Nevertheless, the irreversible C-H bond cleavage step will give a primary KIE with the other two experiments, since the second step still affects the product distribution. Therefore, with Experiments B and C, it is possible to observe the KIE even if the C-H or C-D bond cleavage occurs not in the rate-determining step but in the product-determining step. Evaluation of rate constant ratios from intermolecular competition reactions In competition reactions, the KIE is calculated from isotopic product or remaining-reactant ratios after the reaction, but these ratios depend strongly on the extent of completion of the reaction. Most often, the isotopic substrate consists of molecules labeled in a specific position and their unlabeled, ordinary counterparts. In the case of ¹³C KIEs, and in similar cases, one can also simply rely on the natural abundance of the isotopic carbon for the KIE experiments, eliminating the need for isotopic labeling. The two isotopic substrates react through the same mechanism, but at different rates. The ratio between the amounts of the two species in the reactants and in the products thus changes gradually over the course of the reaction, and this gradual change can be treated as follows. Assume that two isotopic molecules, A₁ and A₂, undergo irreversible competition reactions, A₁ → P₁ (rate constant k₁) and A₂ → P₂ (rate constant k₂). Because each isotopologue is consumed with (pseudo-)first-order kinetics, the KIE for this scenario is found to be KIE = k₁/k₂ = ln(1 − F₁)/ln(1 − F₂), where F₁ and F₂ refer to the fractions of conversion of the isotopic species A₁ and A₂, respectively.
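The following short sketch shows how these competition expressions behave in practice, under the assumption of irreversible (pseudo-)first-order consumption of both isotopologues; the function names and numerical values are illustrative only.

import math

def kie_from_conversions(F1, F2):
    # KIE = k1/k2 = ln(1 - F1) / ln(1 - F2) for parallel irreversible reactions
    return math.log(1.0 - F1) / math.log(1.0 - F2)

def substrate_isotope_ratio(F, kie):
    # Ratio R/R0 of heavy to light isotopologue in the unreacted starting material,
    # relative to its initial value, after an overall fractional conversion F.
    return (1.0 - F) ** (1.0 / kie - 1.0)

print(kie_from_conversions(0.50, 0.31))     # ~1.87
for F in (0.50, 0.90, 0.99):
    print(F, substrate_isotope_ratio(F, 1.02))
# Even a KIE as small as 1.02 enriches the remaining starting material noticeably:
# about 1.4%, 4.6%, and 9.4% at 50%, 90%, and 99% conversion, respectively.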
Isotopic enrichment of the starting material can be calculated from the dependence of R/R on F for various KIEs, yielding the following figure. Due to the exponential dependence, even very low KIEs lead to large changes in isotopic composition of the starting material at high conversions. When the products are followed, the KIE can be calculated using the products ratio R along with R as follows: Kinetic isotope effect measurement at natural abundance KIE measurement at natural abundance is a simple general method for measuring KIEs for chemical reactions performed with materials of natural abundance. This technique for measuring KIEs overcomes many limitations of previous KIE measurement methods. KIE measurements from isotopically labeled materials require a new synthesis for each isotopically labeled material (a process often prohibitively difficult), a competition reaction, and an analysis. The KIE measurement at natural abundance avoids these issues by taking advantage of high precision quantitative techniques (nuclear magnetic resonance spectroscopy, isotope-ratio mass spectrometry) to site selectively measure kinetic fractionation of isotopes, in either product or starting material for a given chemical reaction. Single-pulse NMR Quantitative single-pulse nuclear magnetic resonance spectroscopy (NMR) is a method amenable for measuring kinetic fractionation of isotopes for natural abundance KIE measurements. Pascal et al. were inspired by studies demonstrating dramatic variations of deuterium within identical compounds from different sources and hypothesized that NMR could be used to measure H KIEs at natural abundance. Pascal and coworkers tested their hypothesis by studying the insertion reaction of dimethyl diazomalonate into cyclohexane. Pascal et al. measured a KIE of 2.2 using H NMR for materials of natural abundance. Singleton and coworkers demonstrated the capacity of C NMR based natural abundance KIE measurements for studying the mechanism of the [4 + 2] cycloaddition of isoprene with maleic anhydride. Previous studies by Gajewski on isotopically enrich materials observed KIE results that suggested an asynchronous transition state, but were always consistent, within error, for a perfectly synchronous reaction mechanism. This work by Singleton et al. established the measurement of multiple C KIEs within the design of a single experiment. These H and C KIE measurements determined at natural abundance found the "inside" hydrogens of the diene experience a more pronounced H KIE than the "outside" hydrogens and the C1 and C4 experience a significant KIE. These key observations suggest an asynchronous reaction mechanism for the cycloaddition of isoprene with maleic anhydride. The limitations for determining KIEs at natural abundance using NMR are that the recovered material must have a suitable amount and purity for NMR analysis (the signal of interest should be distinct from other signals), the reaction of interest must be irreversible, and the reaction mechanism must not change for the duration of the chemical reaction. Experimental details for using quantitative single pulse NMR to measure KIE at natural abundance as follows: the experiment needs to be performed under quantitative conditions including a relaxation time of 5 T, measured 90° flip angle, a digital resolution of at least 5 points across a peak, and a signal:noise greater than 250. The raw FID is zero-filled to at least 256K points before the Fourier transform. 
NMR spectra are phased and then treated with a zeroth order baseline correction without any tilt correction. Signal integrations are determined numerically with a minimal tolerance for each integrated signal. Organometallic reaction mechanism elucidation examples Colletto et al. developed a regioselective β-arylation of benzo[b]thiophenes at room temperature with aryl iodides as coupling partners and sought to understand the mechanism of this reaction by performing natural abundance KIE measurements via single pulse NMR. The observation of a primary C isotope effect at C3, an inverse H isotope effect, a secondary C isotope effect at C2, and the lack of a H isotope effect at C2; led Colletto et al. to suggest a Heck-type reaction mechanism for the regioselective -arylation of benzo[b]thiophenes at room temperature with aryl iodides as coupling partners. Frost et al. sought to understand the effects of Lewis acid additives on the mechanism of enantioselective palladium-catalyzed C-N bond activation using natural abundance KIE measurements via single pulse NMR. The primary C KIE observed in the absence of BPh suggests a reaction mechanism with rate limiting cis oxidation into the C–CN bond of the cyanoformamide. The addition of BPh causes a relative decrease in the observed C KIE which led Frost et al. to suggest a change in the rate limiting step from cis oxidation to coordination of palladium to the cyanoformamide. DEPT-55 NMR Though KIE measurements at natural abundance are a powerful tool for understanding reaction mechanisms, the amounts of material needed for analysis can make this technique inaccessible for reactions that use expensive reagents or unstable starting materials. To mitigate these limitations, Jacobsen and coworkers developed H to C polarization transfer as a means to reduce the time and material required for KIE measurements at natural abundance. The distortionless enhancement by polarization transfer (DEPT) takes advantage of the larger gyromagnetic ratio of H over C, to theoretically improve measurement sensitivity by a factor of 4 or decrease experiment time by a factor of 16. This method for natural abundance kinetic isotope measurement is favorable for analysis for reactions containing unstable starting materials, and catalysts or products that are relatively costly. Jacobsen and coworkers identified the thiourea-catalyzed glycosylation of galactose as a reaction that met both of the aforementioned criteria (expensive materials and unstable substrates) and was a reaction with a poorly understood mechanism. Glycosylation is a special case of nucleophilic substitution that lacks clear definition between S1 and S2 mechanistic character. The presence of the oxygen adjacent to the site of displacement (i.e., C1) can stabilize positive charge. This charge stabilization can cause any potential concerted pathway to become asynchronous and approaches intermediates with oxocarbenium character of the S1 mechanism for glycosylation. Jacobsen and coworkers observed small normal KIEs at C1, C2, and C5 which suggests significant oxocarbenium character in the transition state and an asynchronous reaction mechanism with a large degree of charge separation. Isotope-ratio mass spectrometry High precision isotope-ratio mass spectrometry (IRMS) is another method for measuring kinetic fractionation of isotopes for natural abundance KIE measurements. Widlanski and coworkers demonstrated S KIE at natural abundance measurements for the hydrolysis of sulfate monoesters. 
Their observation of a large KIE suggests S-O bond cleavage is rate controlling and likely rules out an associate reaction mechanism. The major limitation for determining KIEs at natural abundance using IRMS is the required site selective degradation without isotopic fractionation into an analyzable small molecule, a non-trivial task. Case studies Primary hydrogen isotope effects Primary hydrogen KIEs refer to cases in which a bond to the isotopically labeled hydrogen is formed or broken at a rate- and/or product-determining step of a reaction. These are the most commonly measured KIEs, and much of the previously covered theory refers to primary KIEs. When there is adequate evidence that transfer of the labeled hydrogen occurs in the rate-determining step of a reaction, if a fairly large KIE is observed, e.g. k/k of at least 5-6 or k/k about 10–13 at room temperature, it is quite likely that the hydrogen transfer is linear and that the hydrogen is fairly symmetrically located in the transition state. It is usually not possible to make comments about tunneling contributions to the observed isotope effect unless the effect is very large. If the primary KIE is not as large, it is generally considered to be indicative of a significant contribution from heavy-atom motion to the reaction coordinate, though it may also mean that hydrogen transfer follows a nonlinear pathway. Secondary hydrogen isotope effects Secondary hydrogen isotope effects or secondary KIE (SKIE) arise in cases where the isotopic substitution is remote from the bond being broken. The remote atom nonetheless influences the internal vibrations of the system, which via changes in zero-point energy (ZPE) affect the rates of chemical reactions. Such effects are expressed as ratios of rate for the light isotope to that of the heavy isotope and can be "normal" (ratio ≥ 1) or "inverse" (ratio < 1) effects. SKIEs are defined as α,β (etc.) secondary isotope effects where such prefixes refer to the position of the isotopic substitution relative to the reaction center (see alpha and beta carbon). The prefix α refers to the isotope associated with the reaction center and the prefix β refers to the isotope associated with an atom neighboring the reaction center and so on. In physical organic chemistry, SKIE is discussed in terms of electronic effects such as induction, bond hybridization, or hyperconjugation. These properties are determined by electron distribution, and depend upon vibrationally averaged bond length and angles that are not greatly affected by isotopic substitution. Thus, the use of the term "electronic isotope effect" while legitimate is discouraged from use as it can be misinterpreted to suggest that the isotope effect is electronic in nature rather than vibrational. SKIEs can be explained in terms of changes in orbital hybridization. When the hybridization of a carbon atom changes from sp to sp, a number of vibrational modes (stretches, in-plane and out-of-plane bending) are affected. The in-plane and out-of-plane bending in an sp hybridized carbon are similar in frequency due to the symmetry of an sp hybridized carbon. In an sp hybridized carbon the in-plane bend is much stiffer than the out-of-plane bending resulting in a large difference in the frequency, the ZPE and thus the SKIE (which exists when there is a difference in the ZPE of the reactant and transition state). The theoretical maximum change caused by the bending frequency difference has been calculated as 1.4. 
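The 1.4 figure quoted above can be reproduced with a rough numerical sketch. The bending wavenumbers used here (an sp³ C–H out-of-plane bend of roughly 1350 cm⁻¹ softening to roughly 800 cm⁻¹ at an sp²-like carbon) are assumed, commonly quoted representative values rather than data from this article, and the deuterium frequencies are approximated by the usual reduced-mass scaling.

import math

h, c, kB, T = 6.62607015e-34, 2.99792458e10, 1.380649e-23, 298.0   # SI units, c in cm/s
shift = math.sqrt((12.0 * 1.0 / 13.0) / (12.0 * 2.0 / 14.0))        # nu_D / nu_H, ~0.73

def hd_zpe_gap(nu_H):
    # Difference in zero-point energy between the C-H and C-D forms of one bending mode
    return h * c * nu_H * (1.0 - shift) / 2.0      # J per molecule

# The H/D ZPE gap shrinks when the bend softens from ~1350 cm^-1 (sp3) to ~800 cm^-1 (sp2-like TS):
ddzpe = hd_zpe_gap(1350.0) - hd_zpe_gap(800.0)
print(math.exp(ddzpe / (kB * T)))                  # ~1.4, the quoted maximum alpha-SKIE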
When carbon undergoes a reaction that changes its hybridization from sp to sp, the out-of-plane bending force constant at the transition state is weaker as it is developing sp character and a "normal" SKIE is observed with typical values of 1.1 to 1.2. Conversely, when carbon's hybridization changes from sp to sp, the out of plane bending force constants at the transition state increase and an inverse SKIE is observed with typical values of 0.8 to 0.9. More generally the SKIE for reversible reactions can be "normal" one way and "inverse" the other if bonding in the transition state is midway in stiffness between substrate and product, or they can be "normal" both ways if bonding is weaker in the transition state, or "inverse" both ways if bonding is stronger in the transition state than in either reactant. An example of an "inverse" α SKIE can be seen in the work of Fitzpatrick and Kurtz who used such an effect to distinguish between two proposed pathways for the reaction of d-amino acid oxidase with nitroalkane anions. Path A involved a nucleophilic attack on the coenzyme flavin adenine dinucleotide (FAD), while path B involves a free-radical intermediate. As path A results in the intermediate carbon changing hybridization from sp to sp an "inverse" SKIE is expected. If path B occurs then no SKIE should be observed as the free radical intermediate does not change hybridization. An SKIE of 0.84 was observed and Path A verified as shown in the scheme below. Another example of SKIE is oxidation of benzyl alcohols by dimethyldioxirane, where three transition states for different mechanisms were proposed. Again, by considering how and if the hydrogen atoms were involved in each, researchers predicted whether or not they would expect an effect of isotopic substitution of them. Then, analysis of the experimental data for the reaction allowed them to choose which pathway was most likely based on the observed isotope effect. Secondary hydrogen isotope effects from the methylene hydrogens were also used to show that Cope rearrangement in 1,5-hexadiene follow a concerted bond rearrangement pathway, and not one of the alternatively proposed allyl radical or 1,4-diyl pathways, all of which are presented in the following scheme. Alternative mechanisms for the Cope rearrangement of 1,5-hexadiene: (from top to bottom), allyl radical, synchronous concerted, and 1,4-dyil pathways. The predominant pathway is found to be the middle one, which has six delocalized π electrons corresponding to an aromatic intermediate. Steric isotope effects The steric isotope effect (SIE) is a SKIE that does not involve bond breaking or formation. This effect is attributed to the different vibrational amplitudes of isotopologues. An example of such an effect is the racemization of 9,10-dihydro-4,5-dimethylphenanthrene. The smaller amplitude of vibration for H than for H in C–H, C–H bonds, results in a smaller van der Waals radius or effective size in addition to a difference in the ZPE between the two. When there is a greater effective bulk of molecules containing one over the other this may be manifested by a steric effect on the rate constant. For the example above, H racemizes faster than H resulting in a SIE. A model for the SIE was developed by Bartell. A SIE is usually small, unless the transformations passes through a transition state with severe steric encumbrance, as in the racemization process shown above. Another example of the SIE is in the deslipping reaction of rotaxanes. 
H, due to its smaller effective size, allows easier passage of the stoppers through the macrocycle, resulting in faster deslipping for the deuterated rotaxanes. Inverse kinetic isotope effects Reactions are known where the deuterated species reacts faster than the undeuterated one, and these cases are said to exhibit inverse KIEs (IKIE). IKIEs are often observed in the reductive elimination of alkyl metal hydrides, e.g. ((MeNCH))PtMe(H). In such cases the C-D bond in the transition state, an agostic species, is highly stabilized relative to the C–H bond. An inverse effect can also occur in a multistep reaction if the overall rate constant depends on a pre-equilibrium prior to the rate-determining step which has an inverse equilibrium isotope effect. For example, the rates of acid-catalyzed reactions are usually 2-3 times greater for reactions in DO catalyzed by DO than for the analogous reactions in HO catalyzed by HO This can be explained for a mechanism of specific hydrogen-ion catalysis of a reactant R by HO (or DO). HO + R RH + HO RH + HO → HO + P The rate of formation of products is then d[P]/dt = k[RH] = kK[HO][R] = k[HO][R]. In the first step, HO is usually a stronger acid than RH. Deuteration shifts the equilibrium toward the more strongly bound acid species RD in which the effect of deuteration on zero-point vibrational energy is greater, so that the deuterated equilibrium constant K is greater than K. This equilibrium isotope effect in the first step usually outweighs the kinetic isotope effect in the second step, so that there is an apparent inverse isotope effect and the observed overall rate constant k = kK decreases. Solvent hydrogen kinetic isotope effects For the solvent isotope effects to be measurable, a fraction of the solvent must have a different isotopic composition than the rest. Therefore, large amounts of the less common isotopic species must be available, limiting observable solvent isotope effects to isotopic substitutions involving hydrogen. Detectable KIEs occur only when solutes exchange hydrogen with the solvent or when there is a specific solute-solvent interaction near the reaction site. Both such phenomena are common for protic solvents, in which the hydrogen is exchangeable, and they may form dipole-dipole interactions or hydrogen bonds with polar molecules. Carbon-13 isotope effects Most organic reactions involve breaking and making bonds to carbon; thus, it is reasonable to expect detectable carbon isotope effects. When C is used as the label, the change in mass of the isotope is only ~8%, though, which limits the observable KIEs to much smaller values than the ones observable with hydrogen isotope effects. Compensating for variations in C natural abundance Often, the largest source of error in a study that depends on the natural abundance of carbon is the slight variation in natural C abundance itself. Such variations arise; because the starting materials in the reaction, are themselves products of other reactions that have KIEs and thus isotopically enrich the products. To compensate for this error when NMR spectroscopy is used to determine the KIE, the following guidelines have been proposed: Choose a carbon that is remote from the reaction center that will serve as a reference and assume it does not have a KIE in the reaction. In the starting material that has not undergone any reaction, determine the ratios of the other carbon NMR peak integrals to that of the reference carbon. 
Isotope effects with elements heavier than carbon
Interpretation of carbon isotope effects is usually complicated by simultaneously forming and breaking bonds to carbon. Even reactions that involve only bond cleavage from the carbon, such as SN1 reactions, involve strengthening of the remaining bonds to carbon. In many such reactions, leaving-group isotope effects tend to be easier to interpret. For example, substitution and elimination reactions in which chlorine acts as a leaving group are convenient to interpret, especially since chlorine acts as a monatomic species with no internal bonding to complicate the reaction coordinate, and it has two stable isotopes, ³⁵Cl and ³⁷Cl, both with high abundance. The major challenge to the interpretation of such isotope effects is the solvation of the leaving group.
Owing to experimental uncertainties, measurement of isotope effects may entail significant uncertainty. Often isotope effects are determined through complementary studies on a series of isotopomers. Accordingly, it is quite useful to combine hydrogen isotope effects with heavy-atom isotope effects. For instance, determining the nitrogen isotope effect along with the hydrogen isotope effect was used to show that the reaction of the 2-phenylethyltrimethylammonium ion with ethoxide in ethanol at 40 °C follows an E2 mechanism, as opposed to alternative non-concerted mechanisms. This conclusion was reached upon showing that this reaction yields a nitrogen isotope effect, k(¹⁴N)/k(¹⁵N), of 1.0133 ± 0.0002, along with a hydrogen KIE of 3.2 at the leaving hydrogen. Similarly, combining nitrogen and hydrogen isotope effects was used to show that syn eliminations of simple ammonium salts also follow a concerted mechanism, which had previously been a matter of debate. In the following two reactions of the 2-phenylcyclopentyltrimethylammonium ion with ethoxide, both of which yield 1-phenylcyclopentene, both isomers exhibited a nitrogen isotope effect k(¹⁴N)/k(¹⁵N) at 60 °C. Although the reaction of the trans isomer, which follows syn elimination, has a smaller nitrogen KIE (1.0064) than the cis isomer, which undergoes anti elimination (1.0108), both results are large enough to be indicative of weakening of the C–N bond in the transition state, as would occur in a concerted process.
Other examples
Since KIEs arise from differences in isotopic mass, the largest observable KIEs are associated with isotopic substitution of ¹H with ²H (a 2-fold increase in mass) or ³H (a 3-fold increase in mass). KIEs from isotopic mass ratios can be as large as 36.4 using muons. Researchers have produced the lightest "hydrogen" atom, muonium (0.113 amu), in which an electron orbits a positive muon (μ⁺) "nucleus" that has a mass of 206 electrons. They have also prepared the heaviest "hydrogen" atom analog by replacing one electron in helium with a negative muon (μ⁻) to form Heμ (mass 4.116 amu). Since μ⁻ is much heavier than an electron, it orbits much closer to the nucleus, effectively shielding one proton and making Heμ behave as a hydrogen atom of mass about 4.1. With these exotic atoms, the reaction of H with ¹H₂ was investigated. Rate constants from reacting the lightest and the heaviest hydrogen analogs with ¹H₂ were then used to calculate the ratio of the two rate constants, between which there is a 36.4-fold difference in isotopic mass.
For this reaction, isotopic substitution happens to produce an inverse KIE, and the authors report a KIE as low as 1.74 × 10⁻⁴, which is the smallest KIE ever reported.
The KIE leads to a specific distribution of ²H in natural products, depending on the route by which they were synthesized in nature. By NMR spectroscopy, it is therefore easy to detect whether the alcohol in wine was fermented from glucose or from illicitly added saccharose.
Another reaction mechanism that was elucidated using the KIE is the halogenation of toluene. In this particular "intramolecular KIE" study, a benzylic hydrogen undergoes radical substitution by bromine using N-bromosuccinimide as the brominating agent. It was found that PhCH3 brominates 4.86 times faster than PhCD3. A large KIE of 5.56 is associated with the reaction of ketones with bromine and sodium hydroxide. In this reaction the rate-limiting step is formation of the enolate by deprotonation of the ketone. In this study the KIE is calculated from the reaction rate constants for regular 2,4-dimethyl-3-pentanone and its deuterated isomer by optical density measurements.
In asymmetric catalysis, there are rare cases where a KIE manifests as a significant difference in the enantioselectivity observed for a deuterated substrate compared to a non-deuterated one. One example was reported by Toste and coworkers, in which a deuterated substrate produced an enantioselectivity of 83% ee, compared to 93% ee for the undeuterated substrate. The effect was taken to corroborate additional inter- and intramolecular competition KIE data that suggested cleavage of the C–H/D bond in the enantiodetermining step.
Notes
See also
Crossover experiment (chemistry)
Equilibrium constant#Effect of isotopic substitution
Isotope effect on lipid peroxidation
Kinetic isotope effects of RuBisCO (ribulose-1,5-bisphosphate carboxylase oxygenase)
Magnetic isotope effect
Reaction mechanism
Transient kinetic isotope fractionation
Urey–Bigeleisen–Mayer equation
References
Further reading
Organic chemistry Physical organic chemistry Deuterium Chemical kinetics Reaction mechanisms
0.780412
0.98782
0.770906
Environmental hazard
Environmental hazards are those hazards that affect biomes or ecosystems. Well known examples include oil spills, water pollution, slash and burn deforestation, air pollution, ground fissures, and build-up of atmospheric carbon dioxide. Physical exposure to environmental hazards is usually involuntary.
Types
Environmental hazards can be categorized in many different ways. One common categorization distinguishes four types: chemical, physical, biological, and psychological.
Chemical hazards are substances that can cause harm or damage to humans, animals, or the environment. They can be in the form of solids, liquids, gases, mists, dusts, fumes, and vapors. Exposure can occur through inhalation, skin absorption, ingestion, or direct contact. Chemical hazards include substances such as pesticides, solvents, acids, bases, reactive metals, and poisonous gases. Exposure to these substances can result in health effects such as skin irritation, respiratory problems, organ damage, neurological effects, and cancer.
Physical hazards are factors within the environment that can harm the body without necessarily touching it. They include a wide range of environmental factors such as noise, vibration, extreme temperatures, radiation, and ergonomic hazards. Physical hazards may lead to injuries like burns, fractures, hearing loss, vision impairment, or other physical harm. They can be present in many work settings such as construction sites, manufacturing plants, and even office spaces.
Biological hazards, also known as biohazards, are organic substances that pose a threat to the health of living organisms, primarily humans. This can include medical waste, samples of a microorganism, virus, or toxin (from a biological source) that can impact human health. Biological hazards can also include substances harmful to animals. Examples of biological hazards include bacteria, viruses, fungi, other microorganisms and their associated toxins. They may cause a myriad of diseases, from flu to more serious and potentially fatal diseases.
Psychological hazards are aspects of work and work environments that can cause psychological harm or mental ill-health. These include factors such as stress, workplace bullying, fatigue, burnout, and violence, among others. These hazards can lead to psychological issues like anxiety, depression, and post-traumatic stress disorder (PTSD). Psychological hazards can exist in any type of workplace, and their management is a crucial aspect of occupational health and safety.
Environmental hazard identification
Environmental hazard identification is the first step in environmental risk assessment, which is the process of assessing the likelihood, or risk, of adverse effects resulting from a given environmental stressor. Hazard identification is the determination of whether, and under what conditions, a given environmental stressor has the potential to cause harm. In hazard identification, sources of data on the risks associated with prospective hazards are identified. For instance, if a site is known to be contaminated with a variety of industrial pollutants, hazard identification will determine which of these chemicals could result in adverse human health effects, and what effects they could cause. Risk assessors rely on both laboratory (e.g., toxicological) and epidemiological data to make these determinations.
Conceptual model of exposure
Hazards have the potential to cause adverse effects only if they come into contact with populations that may be harmed.
For this reason, hazard identification includes the development of a conceptual model of exposure. Conceptual models communicate the pathway connecting sources of a given hazard to the potentially exposed population(s). The U.S. Agency for Toxic Substances and Disease Registry establishes five elements that should be included in a conceptual model of exposure:
The source of the hazard in question
Environmental fate and transport, or how the hazard moves and changes in the environment after its release
Exposure point or area, or the place at which an exposed person comes into contact with the hazard
Exposure route, or the manner by which an exposed person comes into contact with the hazard (e.g., orally, dermally, or by inhalation)
Potentially exposed populations.
Evaluating hazard data
Once a conceptual model of exposure is developed for a given hazard, measurements should be taken to determine the presence and quantity of the hazard. These measurements should be compared to appropriate reference levels to determine whether a hazard exists. For instance, if arsenic is detected in tap water from a given well, the detected concentrations should be compared with regulatory thresholds for allowable levels of arsenic in drinking water. If the detected levels are consistently lower than these limits, arsenic may not be a chemical of potential concern for the purposes of this risk assessment. When interpreting hazard data, risk assessors must consider the sensitivity of the instrument and method used to take these measurements, including any relevant detection limits (i.e., the lowest level of a given substance that an instrument or method is capable of detecting).
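As a minimal illustration of this screening step, the Python sketch below compares hypothetical measured concentrations against an assumed reference level and method detection limit; the chemical, threshold, and sample values are invented and are not actual regulatory figures.

```python
def screen_chemical(measurements, reference_level, detection_limit):
    """Flag a chemical as a potential concern if any quantifiable measurement
    meets or exceeds the reference level; values below the detection limit are
    treated as non-detects rather than as zeros."""
    quantified = [m for m in measurements if m >= detection_limit]
    return {
        "n_samples": len(measurements),
        "n_non_detects": len(measurements) - len(quantified),
        "max_detected": max(quantified) if quantified else None,
        "chemical_of_potential_concern": any(m >= reference_level for m in quantified),
    }

# Hypothetical well-water arsenic results in ug/L, screened against an assumed
# reference level of 10 ug/L and a method detection limit of 0.5 ug/L.
arsenic = [2.1, 0.4, 3.7, 0.2, 1.9]
print(screen_chemical(arsenic, reference_level=10.0, detection_limit=0.5))
# -> two non-detects, maximum detected 3.7, not a chemical of potential concern here
```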
Chemical
Chemical hazards are defined in the Globally Harmonized System and in the European Union chemical regulations. They are caused by chemical substances causing significant damage to the environment. The label is particularly applicable towards substances with aquatic toxicity. An example is zinc oxide, a common paint pigment, which is extremely toxic to aquatic life.
Toxicity or other hazards do not imply an environmental hazard, because elimination by sunlight (photolysis), water (hydrolysis) or organisms (biological elimination) neutralizes many reactive or poisonous substances. Persistence towards these elimination mechanisms combined with toxicity gives the substance the ability to do damage in the long term. Also, the lack of immediate human toxicity does not mean the substance is environmentally nonhazardous. For example, tanker truck-sized spills of substances such as milk can cause a lot of damage in the local aquatic ecosystems: the added biological oxygen demand causes rapid eutrophication, leading to anoxic conditions in the water body. All hazards in this category are mainly anthropogenic, although there exist a number of natural carcinogens, and chemical elements like radon and lead may turn up in health-critical concentrations in the natural environment. Examples include agents used in animals destined for human consumption, carcinogenic contaminants of fresh water sources (water wells), lead in paint, and radon and other natural sources of radioactivity.
Physical
A physical hazard is a type of occupational hazard that involves environmental hazards that can cause harm with or without contact. Examples include noise, vibration, extreme temperatures, and radiation.
Biological
Biological hazards, also known as biohazards, refer to biological substances that pose a threat to the health of living organisms, primarily that of humans. This can include medical waste or samples of a microorganism, virus or toxin (from a biological source) that can affect human health. Examples include common allergens, bovine spongiform encephalopathy (BSE), onchocerciasis (river blindness), and severe acute respiratory syndrome (SARS).
Psychological
Psychological hazards include but are not limited to stress, violence and other workplace stressors. Work is generally beneficial to mental health and personal wellbeing. It provides people with structure and purpose and a sense of identity.
See also
References
Environmental health Hazards Public health
0.775784
0.993679
0.77088
Photochemistry
Photochemistry is the branch of chemistry concerned with the chemical effects of light. Generally, this term is used to describe a chemical reaction caused by absorption of ultraviolet (wavelength from 100 to 400 nm), visible (400–750 nm), or infrared radiation (750–2500 nm). In nature, photochemistry is of immense importance as it is the basis of photosynthesis, vision, and the formation of vitamin D with sunlight. It is also responsible for the appearance of DNA mutations leading to skin cancers. Photochemical reactions proceed differently than temperature-driven reactions. Photochemical paths access high-energy intermediates that cannot be generated thermally, thereby overcoming large activation barriers in a short period of time, and allowing reactions otherwise inaccessible by thermal processes. Photochemistry can also be destructive, as illustrated by the photodegradation of plastics. Concept Grotthuss–Draper law and Stark–Einstein law Photoexcitation is the first step in a photochemical process where the reactant is elevated to a state of higher energy, an excited state. The first law of photochemistry, known as the Grotthuss–Draper law (for chemists Theodor Grotthuss and John W. Draper), states that light must be absorbed by a chemical substance in order for a photochemical reaction to take place. According to the second law of photochemistry, known as the Stark–Einstein law (for physicists Johannes Stark and Albert Einstein), for each photon of light absorbed by a chemical system, no more than one molecule is activated for a photochemical reaction, as defined by the quantum yield. Fluorescence and phosphorescence When a molecule or atom in the ground state (S0) absorbs light, one electron is excited to a higher orbital level. This electron maintains its spin according to the spin selection rule; other transitions would violate the law of conservation of angular momentum. The excitation to a higher singlet state can be from HOMO to LUMO or to a higher orbital, so that singlet excitation states S1, S2, S3... at different energies are possible. Kasha's rule stipulates that higher singlet states would quickly relax by radiationless decay or internal conversion (IC) to S1. Thus, S1 is usually, but not always, the only relevant singlet excited state. This excited state S1 can further relax to S0 by IC, but also by an allowed radiative transition from S1 to S0 that emits a photon; this process is called fluorescence. Alternatively, it is possible for the excited state S1 to undergo spin inversion and to generate a triplet excited state T1 having two unpaired electrons with the same spin. This violation of the spin selection rule is possible by intersystem crossing (ISC) of the vibrational and electronic levels of S1 and T1. According to Hund's rule of maximum multiplicity, this T1 state would be somewhat more stable than S1. This triplet state can relax to the ground state S0 by radiationless ISC or by a radiation pathway called phosphorescence. This process implies a change of electronic spin, which is forbidden by spin selection rules, making phosphorescence (from T1 to S0) much slower than fluorescence (from S1 to S0). Thus, triplet states generally have longer lifetimes than singlet states. These transitions are usually summarized in a state energy diagram or Jablonski diagram, the paradigm of molecular photochemistry. These excited species, either S1 or T1, have a half-empty low-energy orbital, and are consequently more oxidizing than the ground state. 
But at the same time, they have an electron in a high-energy orbital, and are thus more reducing. In general, excited species are prone to participate in electron transfer processes. Experimental setup Photochemical reactions require a light source that emits wavelengths corresponding to an electronic transition in the reactant. In the early experiments (and in everyday life), sunlight was the light source, although it is polychromatic. Mercury-vapor lamps are more common in the laboratory. Low-pressure mercury-vapor lamps mainly emit at 254 nm. For polychromatic sources, wavelength ranges can be selected using filters. Alternatively, laser beams are usually monochromatic (although two or more wavelengths can be obtained using nonlinear optics), and LEDs have a relatively narrow band that can be efficiently used, as well as Rayonet lamps, to get approximately monochromatic beams. The emitted light must reach the targeted functional group without being blocked by the reactor, medium, or other functional groups present. For many applications, quartz is used for the reactors as well as to contain the lamp. Pyrex absorbs at wavelengths shorter than 275 nm. The solvent is an important experimental parameter. Solvents are potential reactants, and for this reason, chlorinated solvents are avoided because the C–Cl bond can lead to chlorination of the substrate. Strongly-absorbing solvents prevent photons from reaching the substrate. Hydrocarbon solvents absorb only at short wavelengths and are thus preferred for photochemical experiments requiring high-energy photons. Solvents containing unsaturation absorb at longer wavelengths and can usefully filter out short wavelengths. For example, cyclohexane and acetone "cut off" (absorb strongly) at wavelengths shorter than 215 and 330 nm, respectively. Typically, the wavelength employed to induce a photochemical process is selected based on the absorption spectrum of the reactive species, most often the absorption maximum. Over the last years, however, it has been demonstrated that, in the majority of bond-forming reactions, the absorption spectrum does not allow selecting the optimum wavelength to achieve the highest reaction yield based on absorptivity. This fundamental mismatch between absorptivity and reactivity has been elucidated with so-called photochemical action plots. Photochemistry in combination with flow chemistry Continuous-flow photochemistry offers multiple advantages over batch photochemistry. Photochemical reactions are driven by the number of photons that are able to activate molecules causing the desired reaction. The large surface-area-to-volume ratio of a microreactor maximizes the illumination, and at the same time allows for efficient cooling, which decreases the thermal side products. Principles In the case of photochemical reactions, light provides the activation energy. Simplistically, light is one mechanism for providing the activation energy required for many reactions. If laser light is employed, it is possible to selectively excite a molecule so as to produce a desired electronic and vibrational state. Equally, the emission from a particular state may be selectively monitored, providing a measure of the population of that state. If the chemical system is at low pressure, this enables scientists to observe the energy distribution of the products of a chemical reaction before the differences in energy have been smeared out and averaged by repeated collisions. 
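To put rough numbers on the idea that light supplies the activation energy, the short sketch below computes the energy carried by one mole of photons (an einstein) at a few of the wavelengths mentioned above; the bond-energy figure in the closing comment is a typical textbook value included only for orientation.

```python
# Energy per photon is E = h*c/wavelength; multiplying by Avogadro's number
# gives the energy of one mole of photons (an "einstein").
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

def einstein_kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength, in kJ/mol."""
    energy_per_photon = H * C / (wavelength_nm * 1e-9)  # joules
    return energy_per_photon * N_A / 1000.0

for wl in (254, 400, 750):  # low-pressure Hg line and the edges of the visible range
    print(f"{wl} nm -> {einstein_kj_per_mol(wl):.0f} kJ/mol of photons")
# 254 nm photons carry about 470 kJ/mol, more than a typical C-C bond energy
# (~350 kJ/mol), whereas 750 nm photons carry only about 160 kJ/mol.
```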
The absorption of a photon by a reactant molecule may also permit a reaction to occur not just by bringing the molecule to the necessary activation energy, but also by changing the symmetry of the molecule's electronic configuration, enabling an otherwise-inaccessible reaction path, as described by the Woodward–Hoffmann selection rules. A [2+2] cycloaddition reaction is one example of a pericyclic reaction that can be analyzed using these rules or by the related frontier molecular orbital theory. Some photochemical reactions are several orders of magnitude faster than thermal reactions; reactions as fast as 10−9 seconds and associated processes as fast as 10−15 seconds are often observed. The photon can be absorbed directly by the reactant or by a photosensitizer, which absorbs the photon and transfers the energy to the reactant. The opposite process, when a photoexcited state is deactivated by a chemical reagent, is called quenching. Most photochemical transformations occur through a series of simple steps known as primary photochemical processes. One common example of these processes is the excited state proton transfer. Photochemical reactions Examples of photochemical reactions Photosynthesis: Plants use solar energy to convert carbon dioxide and water into glucose and oxygen. Human formation of vitamin D by exposure to sunlight. Bioluminescence: e.g. In fireflies, an enzyme in the abdomen catalyzes a reaction that produces light. Polymerizations started by photoinitiators, which decompose upon absorbing light to produce the free radicals for radical polymerization. Photodegradation of many substances, e.g. polyvinyl chloride and Fp. Medicine bottles are often made from darkened glass to protect the drugs from photodegradation. Photochemical rearrangements, e.g. photoisomerization, hydrogen atom transfer, and photochemical electrocyclic reactions. Photodynamic therapy: Light is used to destroy tumors by the action of singlet oxygen generated by photosensitized reactions of triplet oxygen. Typical photosensitizers include tetraphenylporphyrin and methylene blue. The resulting singlet oxygen is an aggressive oxidant, capable of converting C–H bonds into C–OH groups. Diazo printing process Photoresist technology, used in the production of microelectronic components. Vision is initiated by a photochemical reaction of rhodopsin. Toray photochemical production of ε-caprolactame. Photochemical production of artemisinin, an anti-malaria drug. Photoalkylation, used for the light-induced addition of alkyl groups to molecules. DNA: photodimerization leading to cyclobutane pyrimidine dimers. Organic photochemistry Examples of photochemical organic reactions are electrocyclic reactions, radical reactions, photoisomerization, and Norrish reactions. Alkenes undergo many important reactions that proceed via a photon-induced π to π* transition. The first electronic excited state of an alkene lacks the π-bond, so that rotation about the C–C bond is rapid and the molecule engages in reactions not observed thermally. These reactions include cis-trans isomerization and cycloaddition to other (ground state) alkene to give cyclobutane derivatives. The cis-trans isomerization of a (poly)alkene is involved in retinal, a component of the machinery of vision. The dimerization of alkenes is relevant to the photodamage of DNA, where thymine dimers are observed upon illuminating DNA with UV radiation. Such dimers interfere with transcription. 
The beneficial effects of sunlight are associated with the photochemically induced retro-cyclization (decyclization) reaction of ergosterol to give vitamin D. In the DeMayo reaction, an alkene reacts with a 1,3-diketone, via the diketone's enol, to yield a 1,5-diketone. Still another common photochemical reaction is Howard Zimmerman's di-π-methane rearrangement.
In an industrial application, about 100,000 tonnes of benzyl chloride are prepared annually by the gas-phase photochemical reaction of toluene with chlorine. The light is absorbed by chlorine molecules, the low energy of this transition being indicated by the yellowish color of the gas. The photon induces homolysis of the Cl–Cl bond, and the resulting chlorine radical converts toluene to the benzyl radical:
Cl2 + hν → 2 Cl·
C6H5CH3 + Cl· → C6H5CH2· + HCl
C6H5CH2· + Cl· → C6H5CH2Cl
Mercaptans can be produced by photochemical addition of hydrogen sulfide (H2S) to alpha olefins.
Inorganic and organometallic photochemistry
Coordination complexes and organometallic compounds are also photoreactive. These reactions can entail cis-trans isomerization. More commonly, photoreactions result in dissociation of ligands, since the photon excites an electron on the metal to an orbital that is antibonding with respect to the ligands. Thus, metal carbonyls that resist thermal substitution undergo decarbonylation upon irradiation with UV light. UV irradiation of a THF solution of molybdenum hexacarbonyl gives the THF complex, which is synthetically useful:
Mo(CO)6 + THF → Mo(CO)5(THF) + CO
In a related reaction, photolysis of iron pentacarbonyl affords diiron nonacarbonyl (see figure):
2 Fe(CO)5 → Fe2(CO)9 + CO
Select photoreactive coordination complexes can undergo oxidation-reduction processes via single electron transfer. This electron transfer can occur within the inner or outer coordination sphere of the metal.
Types of photochemical reactions
Here are some different types of photochemical reactions:
Photo-dissociation: AB + hν → A* + B*
Photo-induced rearrangements, isomerization: A + hν → B
Photo-addition: A + B + hν → AB
Photo-substitution: A + BC + hν → AB + C
Photo-redox reaction: A + B + hν → A− + B+
Historical
Although bleaching has long been practiced, the first photochemical reaction was described by Trommsdorff in 1834. He observed that crystals of the compound α-santonin, when exposed to sunlight, turned yellow and burst. In a 2007 study the reaction was described as a succession of three steps taking place within a single crystal. The first step is a rearrangement reaction to a cyclopentadienone intermediate (2), the second one a dimerization in a Diels–Alder reaction (3), and the third one an intramolecular [2+2] cycloaddition (4). The bursting effect is attributed to a large change in crystal volume on dimerization.
Specialized journals
Journal of Photochemistry and Photobiology
ChemPhotoChem
Photochemistry and Photobiology
Photochemical & Photobiological Sciences
Photochemistry
Learned societies
Inter-American Photochemical Society
European Photochemistry Association
Asian and Oceanian Photochemistry Association
International conferences
IUPAC Symposium on Photochemistry (biennial)
International Conference on Photochemistry (biennial)
The organization of these conferences is facilitated by the International Foundation for Photochemistry.
See also Photonic molecule Photoelectrochemical cell Photochemical logic gate Photosynthesis Light-dependent reactions List of photochemists Single photon sources Photogeochemistry Photoelectric effect Photolysis Blueprint References Further reading Bowen, E. J., Chemical Aspects of Light. Oxford: The Clarendon Press, 1942. 2nd edition, 1946. Photochemistry Chemistry
0.778932
0.989661
0.770879
Ecosystem ecology
Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals. Ecosystem ecology examines physical and biological structures and examines how these ecosystem characteristics interact with each other. Ultimately, this helps us understand how to maintain high quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions. Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water. This example demonstrates several important aspects of ecosystems: Ecosystem boundaries are often nebulous and may fluctuate in time Organisms within ecosystems are dependent on ecosystem level biological and physical processes Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear; even though decisions in one ecosystem will affect the other. We need better understanding of the interactions and interdependencies of these ecosystems and the processes that maintain them before we can begin to address these questions. Ecosystem ecology is an inherently interdisciplinary field of study. An individual ecosystem is composed of populations of organisms, interacting within communities, and contributing to the cycling of nutrients and the flow of energy. The ecosystem is the principal unit of study in ecosystem ecology. Population, community, and physiological ecology provide many of the underlying biological mechanisms influencing ecosystems and the processes they maintain. Flowing of energy and cycling of matter at the ecosystem level are often examined in ecosystem ecology, but, as a whole, this science is defined more by subject matter than by scale. Ecosystem ecology approaches organisms and abiotic pools of energy and nutrients as an integrated system which distinguishes it from associated sciences such as biogeochemistry. Biogeochemistry and hydrology focus on several fundamental ecosystem processes such as biologically mediated chemical cycling of nutrients and physical-biological cycling of water. Ecosystem ecology forms the mechanistic basis for regional or global processes encompassed by landscape-to-regional hydrology, global biogeochemistry, and earth system science. History Ecosystem ecology is philosophically and historically rooted in terrestrial ecology. 
The ecosystem concept has evolved rapidly during the last 100 years with important ideas developed by Frederic Clements, a botanist who argued for specific definitions of ecosystems and that physiological processes were responsible for their development and persistence. Although most of Clements ecosystem definitions have been greatly revised, initially by Henry Gleason and Arthur Tansley, and later by contemporary ecologists, the idea that physiological processes are fundamental to ecosystem structure and function remains central to ecology. Later work by Eugene Odum and Howard T. Odum quantified flows of energy and matter at the ecosystem level, thus documenting the general ideas proposed by Clements and his contemporary Charles Elton. In this model, energy flows through the whole system were dependent on biotic and abiotic interactions of each individual component (species, inorganic pools of nutrients, etc.). Later work demonstrated that these interactions and flows applied to nutrient cycles, changed over the course of succession, and held powerful controls over ecosystem productivity. Transfers of energy and nutrients are innate to ecological systems regardless of whether they are aquatic or terrestrial. Thus, ecosystem ecology has emerged from important biological studies of plants, animals, terrestrial, aquatic, and marine ecosystems. Ecosystem services Ecosystem services are ecologically mediated functional processes essential to sustaining healthy human societies. Water provision and filtration, production of biomass in forestry, agriculture, and fisheries, and removal of greenhouse gases such as carbon dioxide (CO2) from the atmosphere are examples of ecosystem services essential to public health and economic opportunity. Nutrient cycling is a process fundamental to agricultural and forest production. However, like most ecosystem processes, nutrient cycling is not an ecosystem characteristic which can be “dialed” to the most desirable level. Maximizing production in degraded systems is an overly simplistic solution to the complex problems of hunger and economic security. For instance, intensive fertilizer use in the midwestern United States has resulted in degraded fisheries in the Gulf of Mexico. Regrettably, a “Green Revolution” of intensive chemical fertilization has been recommended for agriculture in developed and developing countries. These strategies risk alteration of ecosystem processes that may be difficult to restore, especially when applied at broad scales without adequate assessment of impacts. Ecosystem processes may take many years to recover from significant disturbance. For instance, large-scale forest clearance in the northeastern United States during the 18th and 19th centuries has altered soil texture, dominant vegetation, and nutrient cycling in ways that impact forest productivity in the present day. An appreciation of the importance of ecosystem function in maintenance of productivity, whether in agriculture or forestry, is needed in conjunction with plans for restoration of essential processes. Improved knowledge of ecosystem function will help to achieve long-term sustainability and stability in the poorest parts of the world. Operation Biomass productivity is one of the most apparent and economically important ecosystem functions. Biomass accumulation begins at the cellular level via photosynthesis. Photosynthesis requires water and consequently global patterns of annual biomass production are correlated with annual precipitation. 
Amounts of productivity are also dependent on the overall capacity of plants to capture sunlight, which is directly correlated with plant leaf area and N content. Net primary productivity (NPP) is the primary measure of biomass accumulation within an ecosystem. Net primary productivity can be calculated by a simple formula where the total amount of productivity is adjusted for total productivity losses through maintenance of biological processes:
NPP = GPP − Rproducer
where GPP is gross primary productivity and Rproducer is photosynthate (carbon) lost via cellular respiration. NPP is difficult to measure, but a new technique known as eddy covariance has shed light on how natural ecosystems influence the atmosphere. Figure 4 shows seasonal and annual changes in CO2 concentration measured at Mauna Loa, Hawaii from 1987 to 1990. CO2 concentration steadily increased, but within-year variation has been greater than the annual increase since measurements began in 1957. These variations were thought to be due to seasonal uptake of CO2 during summer months. A newly developed technique for assessing ecosystem NPP has confirmed that this seasonal variation is driven by seasonal changes in CO2 uptake by vegetation. This has led many scientists and policy makers to speculate that ecosystems can be managed to ameliorate problems with global warming. This type of management may include reforesting or altering forest harvest schedules for many parts of the world.
Decomposition and nutrient cycling
Decomposition and nutrient cycling are fundamental to ecosystem biomass production. Most natural ecosystems are nitrogen (N) limited, and biomass production is closely correlated with N turnover. Typically, external input of nutrients is very low and efficient recycling of nutrients maintains productivity. Decomposition of plant litter accounts for the majority of nutrients recycled through ecosystems (Figure 3). Rates of plant litter decomposition are highly dependent on litter quality; a high concentration of phenolic compounds, especially lignin, in plant litter has a retarding effect on litter decomposition. More complex C compounds are decomposed more slowly and may take many years to break down completely. Decomposition is typically described with exponential decay and has been related to the mineral concentrations, especially manganese, in the leaf litter. Globally, rates of decomposition are mediated by litter quality and climate. Ecosystems dominated by plants with low lignin concentration often have rapid rates of decomposition and nutrient cycling (Chapin et al. 1982). Simple carbon (C) containing compounds are preferentially metabolized by decomposer microorganisms, which results in rapid initial rates of decomposition (see Figure 5A), in contrast to models that depend on constant rates of decay, so-called "k" values (see Figure 5B). In addition to litter quality and climate, the activity of soil fauna is very important. However, these models do not reflect the simultaneous linear and non-linear decay processes which likely occur during decomposition. For instance, proteins, sugars and lipids decompose exponentially, but lignin decays at a more linear rate. Thus, litter decay is inaccurately predicted by simplistic models. A simple alternative model presented in Figure 5C shows significantly more rapid decomposition than the standard model of Figure 5B.
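The modelling problem described above can be made concrete with a short sketch contrasting the standard single-pool "k" model with a two-pool variant in which a labile fraction (sugars, proteins, lipids) decays quickly and a recalcitrant, lignin-rich fraction decays slowly. This is not the specific alternative model of Figure 5C; the pool sizes and rate constants are invented simply to show why one constant "k" value can misrepresent litter of mixed chemistry.

```python
import math

def single_pool(mass0, k, t):
    """Classic single-exponential litter decay: M(t) = M0 * exp(-k*t)."""
    return mass0 * math.exp(-k * t)

def two_pool(mass0, labile_frac, k_labile, k_recalcitrant, t):
    """Two pools decaying independently; returns total mass remaining at time t."""
    labile = mass0 * labile_frac * math.exp(-k_labile * t)
    recalcitrant = mass0 * (1 - labile_frac) * math.exp(-k_recalcitrant * t)
    return labile + recalcitrant

M0 = 100.0  # grams of litter
for years in (0.5, 1, 2, 5):
    m1 = single_pool(M0, k=0.5, t=years)
    m2 = two_pool(M0, labile_frac=0.6, k_labile=2.0, k_recalcitrant=0.1, t=years)
    print(f"t = {years:>3} yr   single-pool: {m1:5.1f} g   two-pool: {m2:5.1f} g")
# The two-pool litter loses mass faster at first, as the labile fraction disappears,
# but then retains a recalcitrant residue far longer than the single "k" model predicts.
```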
Better understanding of decomposition models is an important research area of ecosystem ecology because this process is closely tied to nutrient supply and the overall capacity of ecosystems to sequester CO2 from the atmosphere.
Trophic dynamics
Trophic dynamics refers to the process of energy and nutrient transfer between organisms. Trophic dynamics is an important part of the structure and function of ecosystems. Figure 3 shows energy transferred for an ecosystem at Silver Springs, Florida. Energy gained by primary producers (plants, P) is consumed by herbivores (H), which are consumed by carnivores (C), which are themselves consumed by "top-carnivores" (TC). One of the most obvious patterns in Figure 3 is that as one moves up to higher trophic levels (i.e. from plants to top-carnivores) the total amount of energy decreases. Plants exert a "bottom-up" control on the energy structure of ecosystems by determining the total amount of energy that enters the system. However, predators can also influence the structure of lower trophic levels from the top down. These influences can dramatically shift dominant species in terrestrial and marine systems. The interplay and relative strength of top-down vs. bottom-up controls on ecosystem structure and function is an important area of research in the greater field of ecology.
Trophic dynamics can strongly influence rates of decomposition and nutrient cycling in time and in space. For example, herbivory can increase litter decomposition and nutrient cycling via direct changes in litter quality and altered dominant vegetation. Insect herbivory has been shown to increase rates of decomposition and nutrient turnover due to changes in litter quality and increased frass inputs. However, insect outbreaks do not always increase nutrient cycling. Stadler showed that C-rich honeydew produced during aphid outbreaks can result in increased N immobilization by soil microbes, thus slowing down nutrient cycling and potentially limiting biomass production. North Atlantic marine ecosystems have been greatly altered by overfishing of cod. Cod stocks crashed in the 1990s, which resulted in increases in their prey such as shrimp and snow crab. Human intervention in ecosystems has resulted in dramatic changes to ecosystem structure and function. These changes are occurring rapidly and have unknown consequences for economic security and human well-being.
Applications and importance
Lessons from two Central American cities
The biosphere has been greatly altered by the demands of human societies. Ecosystem ecology plays an important role in understanding and adapting to the most pressing current environmental problems. Restoration ecology and ecosystem management are closely associated with ecosystem ecology. Restoring highly degraded resources depends on integration of functional mechanisms of ecosystems. Without these functions intact, the economic value of ecosystems is greatly reduced and potentially dangerous conditions may develop in the field. For example, areas within the mountainous western highlands of Guatemala are more susceptible to catastrophic landslides and crippling seasonal water shortages due to loss of forest resources. In contrast, cities such as Totonicapán that have preserved forests through strong social institutions have greater local economic stability and overall greater human well-being.
This situation is striking considering that these areas are close to each other, the majority of inhabitants are of Mayan descent, and the topography and overall resources are similar. This is a case of two groups of people managing resources in fundamentally different ways. Ecosystem ecology provides the basic science needed to avoid degradation and to restore ecosystem processes that provide for basic human needs. See also Biogeochemistry Community ecology Earth system science Holon (philosophy) Landscape ecology Systems ecology MuSIASEM References Systems ecology Global natural environment Ecological processes Ecosystems
0.78747
0.978849
0.770814
Homophily
Homophily is a concept in sociology describing the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been discovered in a vast array of network studies: a large number of studies have observed homophily in some form or another, and they establish that similarity is associated with connection. The categories on which homophily occurs include age, gender, class, and organizational role. The opposite of homophily is heterophily or intermingling. Individuals in homophilic relationships share common characteristics (beliefs, values, education, etc.) that make communication and relationship formation easier. Homophily between mated pairs in animals has been extensively studied in the field of evolutionary biology, where it is known as assortative mating. Homophily between mated pairs is common within natural animal mating populations. Homophily has a variety of consequences for social and economic outcomes.
Types and dimensions
Baseline vs. inbreeding
To test the relevance of homophily, researchers have distinguished between two types:
Baseline homophily: simply the amount of homophily that would be expected by chance given an existing uneven distribution of people with varying characteristics; and
Inbreeding homophily: the amount of homophily over and above this expected value, typically due to personal preferences and choices.
Status vs. value
In their original formulation of homophily, Paul Lazarsfeld and Robert K. Merton (1954) distinguished between status homophily and value homophily; individuals with similar social status characteristics were more likely to associate with each other than by chance:
Status homophily: includes both society-ascribed characteristics (e.g. race, ethnicity, sex, and age) and acquired characteristics (e.g., religion, occupation, behavior patterns, and education).
Value homophily: involves association with others who have similar values, attitudes, and beliefs, regardless of differences in status characteristics.
Dimensions
Race and ethnicity
Social networks in the United States today are strongly divided by race and ethnicity, which account for a large proportion of inbreeding homophily (though classification by these criteria can be problematic in sociology due to fuzzy boundaries and different definitions of race). Smaller groups have lower diversity simply due to the number of members. This tends to give racial and ethnic minority groups a higher baseline homophily. Race and ethnicity also correlate with educational attainment and occupation, which further increases baseline homophily.
Sex and gender
In terms of sex and gender, baseline homophily in networks is relatively low compared to race and ethnicity. In this form of homophily, men and women frequently live together and form large populations that are normally equal in size. It is also common to find higher levels of gender homophily among school students. Most sex homophily is a result of inbreeding homophily.
Age
Most age homophily is of the baseline type. An interesting pattern of inbreeding age homophily for groups of different ages was found by Marsden (1988). It indicated a strong relationship between someone's age and the social distance to other people with regard to confiding in someone. For example, the larger the age gap, the lower the chance that a person was named by people of younger ages as someone with whom to "discuss important matters."
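Dimensions such as these are usually quantified in the same way: the observed share of same-group ties is compared with the share expected under baseline (chance) mixing, and any excess is read as inbreeding homophily. The sketch below uses invented group sizes and tie counts, and the final index is just one simple normalization of that excess, not necessarily the measure used in any particular study.

```python
def homophily_summary(group_sizes, same_group_ties, total_ties):
    """Compare the observed share of same-group ties with the share expected by chance."""
    n = sum(group_sizes.values())
    # Baseline: probability that a randomly chosen pair of individuals shares a group.
    expected = sum(s * (s - 1) for s in group_sizes.values()) / (n * (n - 1))
    observed = same_group_ties / total_ties
    # Positive values indicate inbreeding homophily over and above the chance baseline.
    index = (observed - expected) / (1 - expected)
    return expected, observed, index

# Invented example: 70 members of group A, 30 of group B, 200 ties, 160 within-group.
expected, observed, index = homophily_summary({"A": 70, "B": 30}, 160, 200)
print(f"expected by chance: {expected:.2f}  observed: {observed:.2f}  index: {index:.2f}")
# -> about 0.58 expected, 0.80 observed, and an index near 0.53 for these made-up numbers
```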
Religion
Homophily based on religion is due to both baseline and inbreeding homophily. Those that belong to the same religion are more likely to exhibit acts of service and aid to one another, such as loaning money, giving therapeutic counseling, and other forms of help during moments of emergency. Parents have been shown to have higher levels of religious homophily than nonparents, which supports the notion that religious institutions are sought out for the benefit of children.
Education, occupation and social class
Family of birth accounts for considerable baseline homophily with respect to education, occupation, and social class. In terms of education, there is a divide between those who have a college education and those who do not. Another major distinction can be seen between those with white-collar occupations and those with blue-collar occupations.
Interests
Homophily occurs within groups of people that have similar interests as well. We enjoy interacting more with individuals who share similarities with us, so we tend to actively seek out these connections. Additionally, as more users begin to rely on the Internet to find like-minded communities for themselves, many examples of niches within social media sites have begun appearing to account for this need. This response has led to the popularity of sites like Reddit in the 2010s, which advertises itself as a "home to thousands of communities... and authentic human interaction."
Social media
As social networks are largely divided by race, social-networking websites like Facebook also foster homophilic atmospheres. When a Facebook user 'likes' or interacts with an article or post of a certain ideology, Facebook continues to show that user posts of that similar ideology (which Facebook believes they will be drawn to). In a research article, McPherson, Smith-Lovin, and Cook (2003) write that homogeneous personal networks result in limited "social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience." This homophily can foster divides and echo chambers on social networking sites, where people of similar ideologies only interact with each other.
Causes and effects
Causes
Geography: Baseline homophily often arises when the people who are located nearby also have similar characteristics. People are more likely to have contact with those who are geographically closer than those who are distant. Technology such as the telephone, e-mail, and social networks have reduced but not eliminated this effect.
Family ties: These ties decay slowly, but familial ties, specifically those of domestic partners, fulfill many requisites that generate homophily. Family relationships are generally close and keep frequent contact though they may be at great geographic distances. Ideas that may get lost in other relational contexts will often instead lead to actions in this setting.
Organizations: School, work, and volunteer activities provide the great majority of non-family ties. Many friendships, confiding relations, and social support ties are formed within voluntary groups. The social homogeneity of most organizations creates a strong baseline homophily in networks that are formed there.
Isomorphic sources: The connections between people who occupy equivalent roles will induce homophily in the system of network ties. This is common in three domains: workplace (e.g., all heads of HR departments will tend to associate with other HR heads), family (e.g., mothers tend to associate with other mothers), and informal networks.
Cognitive processes: People who have demographic similarity tend to own shared knowledge, and therefore they have a greater ease of communication and share cultural tastes, which can also generate homophily.
Effects
According to one study, perception of interpersonal similarity improves coordination and increases the expected payoff of interactions, above and beyond the effect of merely "liking others." Another study claims that homophily produces tolerance and cooperation in social spaces. However, homophilic patterns can also restrict access to information or inclusion for minorities.
Nowadays, the restrictive patterns of homophily can be widely seen within social media. This selectiveness within social media networks can be traced back to the origins of Facebook and the transition of users from MySpace to Facebook in the early 2000s. One study of this shift in a network's user base (from 2011) found that this perception of homophily impacted many individuals' preference for one site over another. Most users chose to be more active on the site their friends were on. However, along with the complexities of belongingness, people of similar ages, economic class, and prospective futures (higher education and/or career plans) shared similar reasons for favoring one social media platform. The different features of homophily affected their outlook on each respective site.
The effects of homophily on the diffusion of information and behaviors are also complex. Some studies have claimed that homophily facilitates access to information, the diffusion of innovations and behaviors, and the formation of social norms. Other studies, however, highlight mechanisms through which homophily can maintain disagreement, exacerbate polarization of opinions, lead to self-segregation between groups, and slow the formation of an overall consensus. As online users have a degree of power to form and dictate the environment, the effects of homophily continue to persist. On Twitter, terms such as "stan Twitter", "Black Twitter", or "local Twitter" have also been created and popularized by users to separate themselves based on specific dimensions.
Homophily is a cause of homogamy, that is, marriage between people with similar characteristics. Homophily is a fertility factor; increased fertility is seen in people with a tendency to seek acquaintance among those with common characteristics. Governmental family policies have a decreased influence on fertility rates in such populations.
See also
Groupthink
Echo chamber (media)
References
Interpersonal relationships Sociological terminology
0.78001
0.988145
0.770764
Markov chain
A Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives Markovian and Markov are used to describe something that is related to a Markov process.
Principles
Definition
A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and, most importantly, such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent.
A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).
Types of Markov chains
The system's state space and time parameter index need to be specified. The following overview distinguishes the different instances of Markov processes by the level of state space generality and by discrete time vs. continuous time:
Countable state space, discrete time: the (discrete-time) Markov chain on a countable or finite state space.
Countable state space, continuous time: the continuous-time Markov chain or Markov jump process.
Continuous or general state space, discrete time: a Markov chain on a measurable (general) state space.
Continuous or general state space, continuous time: any continuous-time stochastic process with the Markov property, for example the Wiener process.
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term.
While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.
However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
Transitions
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
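The role of the transition matrix can be made concrete with a tiny numerical sketch: a two-state chain with an invented transition matrix, and the probability distribution over states after a few steps.

```python
# A two-state Markov chain (states "A" and "B") with an invented transition matrix P,
# where P[i][j] is the probability of moving from state i to state j in one step.
P = {
    "A": {"A": 0.9, "B": 0.1},
    "B": {"A": 0.5, "B": 0.5},
}

def step(dist, P):
    """Advance a probability distribution over the states by one transition."""
    return {j: sum(dist[i] * P[i][j] for i in P) for j in P}

dist = {"A": 1.0, "B": 0.0}  # initial distribution: start in state A
for n in range(1, 4):
    dist = step(dist, P)
    print(n, {s: round(p, 3) for s, p in dist.items()})
# After one step the distribution is {'A': 0.9, 'B': 0.1}; each row of P sums to 1,
# and as n grows the distribution approaches the chain's stationary distribution (5/6, 1/6).
```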
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independently of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s. Examples Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6. A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one. A non-Markov example Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. If Xn represents the total value of the coins set on the table after n draws, with X0 = 0, then the sequence {Xn : n ≥ 0} is not a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus X6 = $0.50. If we know not just X6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X7 are impacted by our knowledge of values prior to X6. However, it is possible to model this scenario as a Markov process.
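The coin-purse argument can be checked by brute-force enumeration. The sketch below (illustrative code, not part of the original example) lists every composition of six drawn coins totalling $0.50 and the distribution of the seventh draw in each case.

```python
from itertools import product
from fractions import Fraction

# Enumeration supporting the coin-purse example above: which compositions of
# 6 drawn coins give a total of $0.50, and what can the 7th draw be in each case?
VALUES = {"quarter": 25, "dime": 10, "nickel": 5}
START = {"quarter": 5, "dime": 5, "nickel": 5}

compositions = []
for q, d, n in product(range(6), repeat=3):
    if q + d + n == 6 and 25 * q + 10 * d + 5 * n == 50:
        compositions.append({"quarter": q, "dime": d, "nickel": n})

for drawn in compositions:
    remaining = {c: START[c] - drawn[c] for c in START}
    total_left = sum(remaining.values())
    next_probs = {c: Fraction(remaining[c], total_left) for c in remaining}
    print(drawn, "->", next_probs)

# One composition (5 nickels + 1 quarter) makes a 7th nickel impossible, the
# other (4 dimes + 2 nickels) does not: the value X6 = $0.50 alone does not
# determine the distribution of X7, so the total-value process is not Markov.
```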
Instead of defining to represent the total value of the coins on the table, we could define to represent the count of the various coin types on the table. For instance, could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state . The probability of achieving now depends on ; for example, the state is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state depends exclusively on the outcome of the state. Formal definition Discrete-time Markov chain A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: if both conditional probabilities are well defined, that is, if The possible values of Xi form a countable set S called the state space of the chain. Variations Time-homogeneous Markov chains are processes where for all n. The probability of the transition is independent of n. Stationary Markov chains are processes where for all n and k. Every stationary chain can be proved to be time-homogeneous by Bayes' rule.A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of is a stationary distribution of the Markov chain. A Markov chain with memory (or a Markov chain of order m) where m is finite, is a process satisfying In other words, the future state depends on the past m states. It is possible to construct a chain from which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., . Continuous-time Markov chain A continuous-time Markov chain (Xt)t ≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space and initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process. Infinitesimal definition Let be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing , is independent of previous values , and as h → 0 for all j and for all t, where is the Kronecker delta, using the little-o notation. The can be seen as measuring how quickly the transition from i to j happens. Jump chain/holding time definition Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states where Si follows the exponential distribution with rate parameter −qYiYi. Transition probability definition For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... 
and all states recorded at these times i0, i1, i2, i3, ... it holds that where pij is the solution of the forward equation (a first-order differential equation) with initial condition P(0) is the identity matrix. Finite state space If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. Stationary distribution relation to eigenvectors and simplices A stationary distribution is a (row) vector, whose entries are non-negative and sum to 1, is unchanged by the operation of transition matrix P on it and so is defined by By comparing this definition with that of an eigenvector we see that the two concepts are related and that is a normalized multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex. Time-homogeneous Markov chain with a finite state space If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution . Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution : where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit does not exist while the stationary distribution does, as shown by this example: (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define It is always true that Subtracting Q from both sides and factoring then yields where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. 
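As a numerical illustration of the limiting behaviour discussed above, the following sketch (with a made-up 3-state transition matrix) finds the stationary distribution as the normalized left eigenvector of P for eigenvalue 1, and checks that a high power of P has every row close to it.

```python
import numpy as np

# Stationary distribution of a small transition matrix, found as the left
# eigenvector of P with eigenvalue 1 (normalized to sum to 1). The 3-state P
# is an illustrative example, not from the article.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

eigvals, eigvecs = np.linalg.eig(P.T)      # left eigenvectors of P = right eigenvectors of P.T
k = np.argmin(np.abs(eigvals - 1.0))       # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi /= pi.sum()                             # normalize so the entries sum to 1

print("stationary distribution:", pi)
print("pi @ P                 :", pi @ P)  # unchanged by P, i.e. pi P = pi
print("P^50 (each row ~ pi)   :")
print(np.linalg.matrix_power(P, 50))       # the limit matrix Q has identical rows
```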
Including the fact that the sum of each the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of transformed former matrix to find Q. Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]−1 exists then Explain: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose each row sums to 1. So it needs any n×n independent linear equations of the (n×n+n) equations to solve for the n×n variables. In this example, the n equations from “Q multiplied by the right-most column of (P-In)” have been replaced by the n stochastic ones. One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P. Convergence speed to the stationary distribution As stated earlier, from the equation (if exists) the stationary (or steady state) distribution is a left eigenvector of row stochastic matrix P. Then assuming that P is diagonalizable or equivalently that P has n linearly independent eigenvectors, speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.) Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ1,λ2,λ3,...,λn). Then by eigendecomposition Let the eigenvalues be enumerated such that: Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other which solves the stationary distribution equation above). Let ui be the i-th column of U matrix, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors ui span we can write If we multiply x with P from right and continue this operation with the results, in the end we get the stationary distribution . In other words, = a1 u1 ← xPP...P = xPk as k → ∞. That means Since is parallel to u1(normalized by L2 norm) and (k) is a probability vector, (k) approaches to a1 u1 = as k → ∞ with a speed in the order of λ2/λ1 exponentially. This follows because hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence is. Random noise in the state distribution can also speed up this convergence to the stationary distribution. General state space Harris chains Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. 
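Returning to the convergence-rate discussion above, the sketch below (the same kind of illustrative 3-state matrix) compares the distance between xP^k and the stationary distribution with |λ2|^k; the two shrink at comparable geometric rates.

```python
import numpy as np

# Illustration of the convergence-rate claim: the distance between x P^k and
# the stationary distribution shrinks roughly like |lambda_2|^k. Example P and
# starting distribution are made up.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = moduli[1]                        # second-largest eigenvalue modulus

# stationary distribution via the eigenvector method
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

x = np.array([1.0, 0.0, 0.0])              # arbitrary starting distribution
for k in (1, 5, 10, 20):
    dist = np.abs(x @ np.linalg.matrix_power(P, k) - pi).sum()
    print(f"k={k:2d}  ||xP^k - pi||_1 = {dist:.2e}   |lambda_2|^k = {lambda2**k:.2e}")
```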
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. Locally interacting Markov chains "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance Interaction of Markov Processes or. Properties Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space. A state has period if is the greatest common divisor of the number of transitions by which can be reached, starting from . That is: The state is periodic if ; otherwise and the state is aperiodic. A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise. For a recurrent state i, the mean hitting time is defined as: State i is positive recurrent if is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property. A state i is called absorbing if there are no outgoing transitions from the state. Irreducibility Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic. If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by . Ergodicity A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer such that all entries of are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps less or equal to a number N. In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. Terminology Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory. Some authors call a matrix primitive iff there exists some integer such that all entries of are positive. Some authors call it regular. Index of primitivity The index of primitivity, or exponent, of a regular matrix, is the smallest such that all entries of are positive. 
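The positivity-of-powers criteria above are easy to test numerically. The sketch below (illustrative two-state matrices) decides primitivity by checking powers of the transition pattern up to Wielandt's bound (n − 1)² + 1 on the index of primitivity, and estimates the period of a state as the gcd of its return-path lengths.

```python
import numpy as np
from math import gcd
from functools import reduce

def is_primitive(P, max_power=None):
    """Return (True, k) for the smallest k with all entries of P^k positive, else (False, None)."""
    n = len(P)
    if max_power is None:
        max_power = (n - 1) ** 2 + 1          # Wielandt's bound on the index of primitivity
    A = (P > 0).astype(int)
    M = np.eye(n, dtype=int)
    for k in range(1, max_power + 1):
        M = (M @ A > 0).astype(int)           # M is now the positivity pattern of A^k
        if M.min() > 0:
            return True, k
    return False, None

def period_of_state(P, i, max_len=50):
    """gcd of the lengths of return paths from state i (the period of state i)."""
    A = (P > 0).astype(int)
    M = np.eye(len(P), dtype=int)
    returns = []
    for k in range(1, max_len + 1):
        M = (M @ A > 0).astype(int)
        if M[i, i]:
            returns.append(k)
    return reduce(gcd, returns) if returns else 0

P_periodic = np.array([[0.0, 1.0], [1.0, 0.0]])   # flips between the two states
P_ergodic  = np.array([[0.5, 0.5], [0.2, 0.8]])

print(is_primitive(P_periodic), period_of_state(P_periodic, 0))  # (False, None), period 2
print(is_primitive(P_ergodic),  period_of_state(P_ergodic, 0))   # (True, 1),     period 1
```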
The exponent is purely a graph-theoretic property, since it depends only on whether each entry of is zero or positive, and therefore can be found on a directed graph with as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states. Let be the number of states, then The exponent is . The only case where it is an equality is when the graph of goes like . If has diagonal entries, then its exponent is . If is symmetric, then has positive diagonal entries, which by previous proposition means its exponent is . (Dulmage-Mendelsohn theorem) The exponent is where is the girth of the graph. It can be improved to , where is the diameter of the graph. Measure-preserving dynamical system If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: Let the probability space be , where is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let be the shift operator: . Similarly we can construct such a dynamical system with instead. Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. In ergodic theory, a measure-preserving dynamical system is called "ergodic" iff any measurable subset such that implies or (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible iff its corresponding measure-preserving dynamical system is ergodic. Markovian representations In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form: If Y has the Markov property, then it is a Markovian representation of X. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one. Hitting times The hitting time is the time, starting in a given set of states until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition. Expected hitting times For a subset of states A ⊆ S, the vector kA of hitting times (where element represents the expected value, starting in state i that the chain enters one of the states in the set A) is the minimal non-negative solution to Time reversal For a CTMC Xt, the time-reversed process is defined to be . By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. Embedded Markov chain One method of finding the stationary probability distribution, , of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). 
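A numerical sketch of this route, using an illustrative 3-state rate matrix: build the embedded jump chain S from Q (each off-diagonal entry divided by the exit rate of its row), take the stationary vector of S, and reweight it by the mean holding times 1/q_i to recover the stationary distribution of the continuous-time chain.

```python
import numpy as np

# Illustrative rate matrix Q (rows sum to zero), not taken from the article.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

q = -np.diag(Q)                      # rate of leaving each state
S = Q / q[:, None]                   # embedded jump chain: s_ij = q_ij / q_i for i != j
np.fill_diagonal(S, 0.0)             # the jump chain never stays in place

# stationary vector of the embedded chain: phi S = phi
w, v = np.linalg.eig(S.T)
phi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
phi /= phi.sum()

pi = phi / q                         # weight each state by its mean holding time 1/q_i
pi /= pi.sum()
print("stationary distribution of the CTMC:", pi)
print("check pi Q ~ 0:", pi @ Q)
```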
Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by From this, S may be written as where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find such that with being a row vector, such that all elements in are greater than 0 and = 1. From this, may be found as (S may be periodic, even if Q is not. Once is found, it must be normalized to a unit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton. Special types of Markov chains Markov model Markov models are used to model changing systems. There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: Bernoulli scheme A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example. Subshift of finite type When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems. Applications Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. Physics Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects. 
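A minimal sketch of the Markov chain Monte Carlo idea just mentioned: a Metropolis chain whose stationary distribution is a target density known only up to a normalizing constant. The standard-normal target, proposal step size, and sample count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def unnormalized_target(x):
    return np.exp(-0.5 * x * x)          # proportional to a standard normal density

def metropolis(n_samples, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.normal(0.0, step)
        # accept with probability min(1, target(proposal) / target(current))
        if rng.random() < unnormalized_target(proposal) / unnormalized_target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

s = metropolis(50_000)
print("sample mean ~ 0:", s.mean(), " sample std ~ 1:", s.std())
```

Running such a chain long enough and averaging over its states approximates expectations under the target, which is how the black-box sampling described above is carried out in practice.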
Markov chains are used in lattice QCD simulations. Chemistry A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains. An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds. Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains. Biology Markov chains are used in various areas of biology. Notable examples include: Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov chains to describe the nucleotide present at a given site in the genome. Population dynamics, where Markov chains are in particular a central tool in the theoretical study of matrix population models. Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex. Systems biology, for instance with the modeling of viral infection of single cells. Compartmental models for disease outbreak and epidemic modeling. Testing Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing. Solar irradiance variability Solar irradiance variability assessments are useful for solar power applications. 
Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, also including modeling the two states of clear and cloudiness as a two-state Markov chain. Speech recognition Hidden Markov models have been used in automatic speech recognition systems. Information theory Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection). The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios. Queueing theory Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth). Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i – 1 (for i > 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue. Internet applications The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability to be at page in the stationary distribution on the following Markov chain on all (known) webpages. If is the number of known webpages, and a page has links to it then it has transition probability for all pages that are linked to and for all pages that are not linked to. The parameter is taken to be about 0.15. Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user. Statistics Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). 
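The PageRank chain described above can be sketched directly: with teleport probability α (about 0.15), a surfer at a page with k outgoing links follows each of them with probability (1 − α)/k and jumps to a uniformly chosen page with probability α/N. The four-page link graph below is made up for illustration.

```python
import numpy as np

alpha = 0.15
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to (hypothetical)
N = len(links)

P = np.full((N, N), alpha / N)                   # teleport part of every transition
for i, outs in links.items():
    for j in outs:
        P[i, j] += (1 - alpha) / len(outs)       # link-following part

# Power iteration: repeatedly apply P until the distribution stops changing.
rank = np.full(N, 1.0 / N)
for _ in range(100):
    rank = rank @ P
print("PageRank (stationary distribution):", rank)
```

Markov chain Monte Carlo applies the same machinery in the opposite direction: rather than computing the stationary distribution of a given chain, it constructs a chain whose stationary distribution is the desired target.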
In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically. Economics and finance Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting. Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings. Social sciences Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's , tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime. Games Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares). Music Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric. A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. 
These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system. Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input. Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed. Baseball Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf. Markov text generators Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V. Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist. Probabilistic forecasting Markov chains have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. The Markov chain forecasting models utilize a variety of settings, from discretizing the time series, to hidden Markov models combined with wavelets, and the Markov chain mixture distribution model (MCM). See also Dynamics of Markovian particles Gauss–Markov process Markov chain approximation method Markov chain geostatistics Markov chain mixing time Markov chain tree theorem Markov decision process Markov information source Markov odometer Markov operator Markov random field Master equation Quantum Markov chain Semi-Markov process Stochastic cellular automaton Telescoping Markov chain Variable-order Markov model Notes References A. A. Markov (1906) "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom 15, pp. 135–156. A. A. Markov (1971). "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons. Classical Text in Translation: Leo Breiman (1992) [1968] Probability. Original edition published by Addison-Wesley; reprinted by Society for Industrial and Applied Mathematics . (See Chapter 7) J. L. Doob (1953) Stochastic Processes. New York: John Wiley and Sons . S. P. Meyn and R. L. Tweedie (1993) Markov Chains and Stochastic Stability. London: Springer-Verlag . online: MCSS . Second edition to appear, Cambridge University Press, 2009. ; (NB. 
This was originally published in Russian as (Markovskiye protsessy) by Fizmatgiz in 1963 and translated to English with the assistance of the author.) S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007. . Appendix contains abridged Meyn & Tweedie. online: CTCN ] Extensive, wide-ranging book meant for specialists, written for both theoretical computer scientists as well as electrical engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines, Markov processes, and undecidability. Excellent treatment of Markov processes pp. 449ff. Discusses Z-transforms, D transforms in their context. Classical text. cf Chapter 6 Finite Markov Chains pp. 384ff. John G. Kemeny & J. Laurie Snell (1960) Finite Markov Chains, D. van Nostrand Company E. Nummelin. "General irreducible Markov chains and non-negative operators". Cambridge University Press, 1984, 2004. Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973) Kishor S. Trivedi, Probability and Statistics with Reliability, Queueing, and Computer Science Applications, John Wiley & Sons, Inc. New York, 2002. . K. S. Trivedi and R.A.Sahner, SHARPE at the age of twenty-two, vol. 36, no. 4, pp. 52–57, ACM SIGMETRICS Performance Evaluation Review, 2009. R. A. Sahner, K. S. Trivedi and A. Puliafito, Performance and reliability analysis of computer systems: an example-based approach using the SHARPE software package, Kluwer Academic Publishers, 1996. . G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, Queueing Networks and Markov Chains, John Wiley, 2nd edition, 2006. . External links Markov Chains chapter in American Mathematical Society's introductory probability book A visual explanation of Markov Chains Original paper by A.A Markov (1913): An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains (translated from Russian) Markov processes Markov models Graph theory Random text generation
0.771243
0.999366
0.770754
Non-covalent interaction
In chemistry, a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons, but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02 × 10^23 molecules). Non-covalent interactions can be classified into different categories, such as electrostatic, π-effects, van der Waals forces, and hydrophobic effects. Non-covalent interactions are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids. They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design, crystallinity and design of materials, particularly for self-assembly, and, in general, the synthesis of many organic molecules. Non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding) or between different molecules, and are therefore also discussed as intermolecular forces. Electrostatic interactions Ionic Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na+) with the negative charge on fluoride (F−). However, this particular interaction is easily broken upon addition to water, or other highly polar solvents. In water, ion pairing is mostly entropy driven; a single salt bridge usually amounts to an attraction value of about ΔG = 5 kJ/mol at intermediate ionic strength I; at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal ions, etc. These interactions can also be seen in molecules with a localized charge on a particular atom. For example, the full negative charge associated with ethoxide, the conjugate base of ethanol, is most commonly accompanied by the positive charge of an alkali metal salt such as the sodium cation (Na+). Hydrogen bonding A hydrogen bond (H-bond) is a specific type of interaction that involves dipole–dipole attraction between a partially positive hydrogen atom and a highly electronegative, partially negative oxygen, nitrogen, sulfur, or fluorine atom (not covalently bound to said hydrogen atom). It is not a covalent bond, but instead is classified as a strong non-covalent interaction. It is responsible for why water is a liquid at room temperature and not a gas (given water's low molecular weight). Most commonly, the strength of hydrogen bonds lies between 0 and 4 kcal/mol, but can sometimes be as strong as 40 kcal/mol. In solvents such as chloroform or carbon tetrachloride one observes, e.g. for the interaction between amides, additive values of about 5 kJ/mol. According to Linus Pauling the strength of a hydrogen bond is essentially determined by the electrostatic charges. Measurements of thousands of complexes in chloroform or carbon tetrachloride have led to additive free energy increments for all kinds of donor–acceptor combinations.
Halogen bonding Halogen bonding is a type of non-covalent interaction which does not involve the formation nor breaking of actual bonds, but rather is similar to the dipole–dipole interaction known as hydrogen bonding. In halogen bonding, a halogen atom acts as an electrophile, or electron-seeking species, and forms a weak electrostatic interaction with a nucleophile, or electron-rich species. The nucleophilic agent in these interactions tends to be highly electronegative (such as oxygen, nitrogen, or sulfur), or may be anionic, bearing a negative formal charge. As compared to hydrogen bonding, the halogen atom takes the place of the partially positively charged hydrogen as the electrophile. Halogen bonding should not be confused with halogen–aromatic interactions, as the two are related but differ by definition. Halogen–aromatic interactions involve an electron-rich aromatic π-cloud as a nucleophile; halogen bonding is restricted to monatomic nucleophiles. Van der Waals forces Van der Waals forces are a subset of electrostatic interactions involving permanent or induced dipoles (or multipoles). These include the following: permanent dipole–dipole interactions, alternatively called the Keesom force dipole-induced dipole interactions, or the Debye force induced dipole-induced dipole interactions, commonly referred to as London dispersion forces Hydrogen bonding and halogen bonding are typically not classified as Van der Waals forces. Dipole–dipole Dipole-dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy). Normally, dipoles are associated with electronegative atoms, including oxygen, nitrogen, sulfur, and fluorine. For example, acetone, the active ingredient in some nail polish removers, has a net dipole associated with the carbonyl (see figure 2). Since oxygen is more electronegative than the carbon that is covalently bonded to it, the electrons associated with that bond will be closer to the oxygen than the carbon, creating a partial negative charge (δ−) on the oxygen, and a partial positive charge (δ+) on the carbon. They are not full charges because the electrons are still shared through a covalent bond between the oxygen and carbon. If the electrons were no longer being shared, then the oxygen-carbon bond would be an electrostatic interaction. Often molecules contain dipolar groups, but have no overall dipole moment. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane. Note that the dipole-dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. See atomic dipoles. Dipole-induced dipole A dipole-induced dipole interaction (Debye force) is due to the approach of a molecule with a permanent dipole to another non-polar molecule with no permanent dipole. This approach causes the electrons of the non-polar molecule to be polarized toward or away from the dipole (or "induce" a dipole) of the approaching molecule. Specifically, the dipole can cause electrostatic attraction or repulsion of the electrons from the non-polar molecule, depending on orientation of the incoming dipole. Atoms with larger atomic radii are considered more "polarizable" and therefore experience greater attractions as a result of the Debye force. 
London dispersion forces London dispersion forces are the weakest type of non-covalent interaction. In organic molecules, however, the multitude of contacts can lead to larger contributions, particularly in the presence of heteroatoms. They are also known as "induced dipole-induced dipole interactions" and present between all molecules, even those which inherently do not have permanent dipoles. Dispersive interactions increase with the polarizability of interacting groups, but are weakened by solvents of increased polarizability. They are caused by the temporary repulsion of electrons away from the electrons of a neighboring molecule, leading to a partially positive dipole on one molecule and a partially negative dipole on another molecule. Hexane is a good example of a molecule with no polarity or highly electronegative atoms, yet is a liquid at room temperature due mainly to London dispersion forces. In this example, when one hexane molecule approaches another, a temporary, weak partially negative dipole on the incoming hexane can polarize the electron cloud of another, causing a partially positive dipole on that hexane molecule. In absence of solvents hydrocarbons such as hexane form crystals due to dispersive forces ; the sublimation heat of crystals is a measure of the dispersive interaction. While these interactions are short-lived and very weak, they can be responsible for why certain non-polar molecules are liquids at room temperature. π-effects π-effects can be broken down into numerous categories, including π-stacking, cation-π and anion-π interactions, and polar-π interactions. In general, π-effects are associated with the interactions of molecules with the π-systems of arenes. π–π interaction π–π interactions are associated with the interaction between the π-orbitals of a molecular system. The high polarizability of aromatic rings lead to dispersive interactions as major contribution to so-called stacking effects. These play a major role for interactions of nucleobases e.g. in DNA. For a simple example, a benzene ring, with its fully conjugated π cloud, will interact in two major ways (and one minor way) with a neighboring benzene ring through a π–π interaction (see figure 3). The two major ways that benzene stacks are edge-to-face, with an enthalpy of ~2 kcal/mol, and displaced (or slip stacked), with an enthalpy of ~2.3 kcal/mol. The sandwich configuration is not nearly as stable of an interaction as the previously two mentioned due to high electrostatic repulsion of the electrons in the π orbitals. Cation–π and anion–π interaction Cation–pi interactions can be as strong or stronger than H-bonding in some contexts. Anion–π interactions are very similar to cation–π interactions, but reversed. In this case, an anion sits atop an electron-poor π-system, usually established by the presence of electron-withdrawing substituents on the conjugated molecule Polar–π Polar–π interactions involve molecules with permanent dipoles (such as water) interacting with the quadrupole moment of a π-system (such as that in benzene (see figure 5). While not as strong as a cation-π interaction, these interactions can be quite strong (~1-2 kcal/mol), and are commonly involved in protein folding and crystallinity of solids containing both hydrogen bonding and π-systems. In fact, any molecule with a hydrogen bond donor (hydrogen bound to a highly electronegative atom) will have favorable electrostatic interactions with the electron-rich π-system of a conjugated molecule. 
Hydrophobic effect The hydrophobic effect is the tendency of non-polar molecules to aggregate in aqueous solutions in order to separate from water. This phenomenon leads to minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and various other biological phenomena. The effect is also commonly seen when mixing various oils (including cooking oil) and water. Over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. However, the hydrophobic effect is not considered a non-covalent interaction, as it is a function of entropy and not a specific interaction between two molecules, and is usually characterized by entropy–enthalpy compensation. An essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees the water molecules, which then, in the bulk water, enjoy close to the maximum of four hydrogen bonds. Examples Drug design Most pharmaceutical drugs are small molecules which elicit a physiological response by "binding" to enzymes or receptors, causing an increase or decrease in the enzyme's ability to function. The binding of a small molecule to a protein is governed by a combination of steric, or spatial, considerations in addition to various non-covalent interactions, although some drugs do covalently modify an active site (see irreversible inhibitors). Using the "lock and key model" of enzyme binding, a drug (key) must be of roughly the proper dimensions to fit the enzyme's binding site (lock). Using the appropriately sized molecular scaffold, drugs must also interact with the enzyme non-covalently in order to maximize the binding affinity (binding constant) and reduce the ability of the drug to dissociate from the binding site. This is achieved by forming various non-covalent interactions between the small molecule and amino acids in the binding site, including: hydrogen bonding, electrostatic interactions, pi stacking, van der Waals interactions, and dipole–dipole interactions. Non-covalent metallo drugs have been developed. For example, dinuclear triple-helical compounds in which three ligand strands wrap around two metals, resulting in a roughly cylindrical tetracation, have been prepared. These compounds bind to the less-common nucleic acid structures, such as duplex DNA, Y-shaped fork structures and 4-way junctions. Protein folding and structure The folding of proteins from a primary (linear) sequence of amino acids to a three-dimensional structure is directed by all types of non-covalent interactions, including the hydrophobic forces and formation of intramolecular hydrogen bonds. Three-dimensional structures of proteins, including the secondary and tertiary structures, are stabilized by formation of hydrogen bonds. Through a series of small conformational changes, spatial orientations are modified so as to arrive at the most energetically minimized orientation achievable. The folding of proteins is often facilitated by enzymes known as molecular chaperones. Sterics, bond strain, and angle strain also play major roles in the folding of a protein from its primary sequence to its tertiary structure. Single tertiary protein structures can also assemble to form protein complexes composed of multiple independently folded subunits.
As a whole, this is called a protein's quaternary structure. The quaternary structure is generated by the formation of relatively strong non-covalent interactions, such as hydrogen bonds, between different subunits to generate a functional polymeric enzyme. Some proteins also utilize non-covalent interactions to bind cofactors in the active site during catalysis; however, a cofactor can also be covalently attached to an enzyme. Cofactors can be either organic or inorganic molecules which assist in the catalytic mechanism of the active enzyme. The strength with which a cofactor is bound to an enzyme may vary greatly; non-covalently bound cofactors are typically anchored by hydrogen bonds or electrostatic interactions. Boiling points Non-covalent interactions have a significant effect on the boiling point of a liquid. Boiling point is defined as the temperature at which the vapor pressure of a liquid is equal to the pressure surrounding the liquid. More simply, it is the temperature at which a liquid becomes a gas. As one might expect, the stronger the non-covalent interactions present for a substance, the higher its boiling point. For example, consider three compounds of similar chemical composition: sodium n-butoxide (C4H9ONa), diethyl ether (C4H10O), and n-butanol (C4H9OH). The predominant non-covalent interactions associated with each species in solution are listed in the above figure. As previously discussed, ionic interactions require considerably more energy to break than hydrogen bonds, which in turn require more energy than dipole–dipole interactions. The trends observed in their boiling points (figure 8) show exactly the correlation expected, where sodium n-butoxide requires significantly more heat energy (higher temperature) to boil than n-butanol, which boils at a much higher temperature than diethyl ether. The heat energy required for a compound to change from liquid to gas is associated with the energy required to break the intermolecular forces each molecule experiences in its liquid state.
0.779855
0.988323
0.770748
Ultraviolet–visible spectroscopy
Ultraviolet (UV) spectroscopy or ultraviolet–visible (UV–VIS) spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV-Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) or transmittance (%T) or reflectance (%R), and its change with time. A UV-vis spectrophotometer is an analytical instrument that measures the amount of ultraviolet (UV) and visible light that is absorbed by a sample. It is a widely used technique in chemistry, biochemistry, and other fields, to identify and quantify compounds in a variety of samples. UV-vis spectrophotometers work by passing a beam of light through the sample and measuring the amount of light that is absorbed at each wavelength. The amount of light absorbed is proportional to the concentration of the absorbing compound in the sample Optical transitions Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to higher energy molecular orbitals, giving rise to an excited state. For organic chromophores, four possible types of transitions are assumed: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals. Applications UV/Vis can be used to monitor structural changes in DNA. UV/Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of diverse analytes or sample, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions but solids and gases may also be studied. Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases. While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement. The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve. A UV/Vis spectrophotometer may be used as a detector for HPLC. 
The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor. The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward–Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present. The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer–Lambert law: A = log10(I0/I) = εcL, where A is the measured absorbance (formally dimensionless but generally reported in absorbance units (AU)), I0 is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of 1/(M·cm), i.e. L mol−1 cm−1. The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm. The Beer–Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (xylenol orange or neutral red, for example). UV–Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV–Vis spectrometers are used to measure the reflectance of light, and can be analyzed via the Forouhi–Bloomer dispersion equations to determine the index of refraction and the extinction coefficient of a given film across the measured spectral range.
Practical considerations
The Beer–Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Worldwide, pharmacopoeias such as the American (USP) and European (Ph. Eur.) demand that spectrophotometers perform according to strict regulatory requirements encompassing factors such as stray light and wavelength accuracy.
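Returning to the quantitative use of the Beer–Lambert law described above, the short Python sketch below first converts a single absorbance reading into a concentration using a known extinction coefficient, and then fits a calibration curve through standards, the more accurate route mentioned earlier. All numerical values (the extinction coefficient, the standard readings, the unknown) are hypothetical and purely illustrative.

```python
import numpy as np

def concentration_from_absorbance(absorbance, epsilon, path_length_cm=1.0):
    """Beer-Lambert law, A = epsilon * c * L, solved for the concentration c.
    epsilon in L mol^-1 cm^-1, path length in cm, result in mol/L."""
    return absorbance / (epsilon * path_length_cm)

# Route 1: use a tabulated extinction coefficient (hypothetical value).
print(concentration_from_absorbance(0.42, epsilon=6220.0))   # ~6.8e-05 mol/L

# Route 2: calibration curve from standards of known concentration (hypothetical data).
conc_std = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5])           # mol/L
abs_std = np.array([0.002, 0.126, 0.248, 0.375, 0.499])      # AU
slope, intercept = np.polyfit(conc_std, abs_std, 1)           # slope approximates epsilon * L

def concentration_from_calibration(a_unknown):
    """Invert the fitted straight line to estimate an unknown concentration."""
    return (a_unknown - intercept) / slope

print(concentration_from_calibration(0.300))                  # mol/L
# For analytes with a non-linear response (e.g. some organic dyes), a
# second-order polynomial can be fitted instead: np.polyfit(conc_std, abs_std, 2).
```

In practice the calibration route also absorbs small instrument offsets through the fitted intercept, which is one reason it is generally preferred over a literature extinction coefficient.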
Spectral bandwidth
The spectral bandwidth of a spectrophotometer is the range of wavelengths that the instrument transmits through a sample at a given time. It is determined by the light source, the monochromator (its physical slit width and optical dispersion), and the detector of the spectrophotometer. The spectral bandwidth affects the resolution and accuracy of the measurement. A narrower spectral bandwidth provides higher resolution and accuracy, but also requires more time and energy to scan the entire spectrum. A wider spectral bandwidth allows for faster and easier scanning, but may result in lower resolution and accuracy, especially for samples with overlapping absorption peaks. Therefore, choosing an appropriate spectral bandwidth is important for obtaining reliable and precise results. It is important to have a monochromatic source of radiation for the light incident on the sample cell to enhance the linearity of the response. The closer the incident light is to being monochromatic (transmitting a single wavelength), the more linear the response will be. The spectral bandwidth is measured as the range of wavelengths transmitted at half the maximum intensity of the light leaving the monochromator. The best spectral bandwidth achievable is a specification of the UV spectrophotometer, and it characterizes how monochromatic the incident light can be. If this bandwidth is comparable to (or more than) the width of the absorption peak of the sample component, then the measured extinction coefficient will not be accurate. In reference measurements, the instrument bandwidth (bandwidth of the incident light) is kept below the width of the spectral peaks. When a test material is being measured, the bandwidth of the incident light should also be sufficiently narrow. Reducing the spectral bandwidth reduces the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal-to-noise ratio.
Wavelength error
The extinction coefficient of an analyte in solution changes gradually with wavelength. A peak (a wavelength where the absorbance reaches a maximum) in the absorbance curve vs wavelength, i.e. the UV–Vis spectrum, is where the rate of change of absorbance with wavelength is the lowest. Therefore, quantitative measurements of a solute are usually conducted using a wavelength around the absorbance peak, to minimize inaccuracies produced by errors in wavelength, due to the change of extinction coefficient with wavelength.
Stray light
Stray light in a UV spectrophotometer is any light that reaches its detector that is not of the wavelength selected by the monochromator. This can be caused, for instance, by scattering of light within the instrument, or by reflections from optical surfaces. Stray light can cause significant errors in absorbance measurements, especially at high absorbances, because the stray light will be added to the signal detected by the detector, even though it is not at the selected wavelength. The result is that the measured and reported absorbance will be lower than the actual absorbance of the sample. The stray light is an important factor, as it determines the purity of the light used for the analysis. The most important factor affecting it is the stray light level of the monochromator. Typically a detector used in a UV–Vis spectrophotometer is broadband; it responds to all the light that reaches it.
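The practical consequence can be illustrated with a small model: if a constant fraction s of the incident intensity reaches the detector regardless of how strongly the sample absorbs, the reported absorbance cannot exceed roughly −log10(s). The Python sketch below is a simplified model of this effect rather than a description of any particular instrument; the 0.1% stray-light level is an assumed figure.

```python
import math

def apparent_absorbance(true_absorbance, stray_fraction):
    """Absorbance reported when a fraction `stray_fraction` of the incident
    intensity reaches the detector unattenuated (simplified stray-light model):
    T_measured = (10**(-A_true) + s) / (1 + s)."""
    t_true = 10.0 ** (-true_absorbance)
    t_measured = (t_true + stray_fraction) / (1.0 + stray_fraction)
    return -math.log10(t_measured)

for a_true in (0.5, 1.0, 2.0, 3.0, 4.0):
    a_meas = apparent_absorbance(a_true, stray_fraction=1e-3)  # assumed 0.1% stray light
    print(f"true A = {a_true:.1f}  ->  reported A = {a_meas:.2f}")
# With 0.1% stray light the reported value levels off near 3 AU, which is why
# high-absorbance readings on a single-monochromator instrument become unreliable.
```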
If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear. As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range. Deviations from the Beer–Lambert law At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer–Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring. Solutions that are not homogeneous can show deviations from the Beer–Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles. The deviations will be most noticeable under conditions of low concentration and high absorbance. The last reference describes a way to correct for this deviation. Some solutions, like copper(II) chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride it means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law. Measurement uncertainty sources The above factors contribute to the measurement uncertainty of the results obtained with UV/Vis spectrophotometry. If UV/Vis spectrophotometry is used in quantitative chemical analysis then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the color of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution. Ultraviolet–visible spectrophotometer The instrument used in ultraviolet–visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light after passing through a sample, and compares it to the intensity of light before it passes through the sample. 
The ratio is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance: A = −log10(%T/100%). The UV–visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample, and compares it to the intensity of light reflected from a reference material (such as a white tile). The ratio is called the reflectance, and is usually expressed as a percentage (%R). The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or a prism as a monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm), a xenon arc lamp, which is continuous from 160 to 2,000 nm, or more recently, light emitting diodes (LED) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one- or two-dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously. A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. The incident intensity, I0, must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs. In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% transmission (or 0 absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken. In a single-beam instrument, the cuvette containing only a solvent has to be measured first. Mettler Toledo developed a single beam array spectrophotometer that allows fast and accurate measurements over the UV/Vis range. The light source consists of a xenon flash lamp for the ultraviolet (UV) as well as for the visible (VIS) and near-infrared wavelength regions, covering a spectral range from 190 up to 1100 nm. The lamp flashes are focused on a glass fiber which drives the beam of light onto a cuvette containing the sample solution. The beam passes through the sample and specific wavelengths are absorbed by the sample components.
The remaining light is collected after the cuvette by a glass fiber and driven into a spectrograph. The spectrograph consists of a diffraction grating that separates the light into the different wavelengths, and a CCD sensor to record the data. The whole spectrum is thus simultaneously measured, allowing for fast recording. Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer–Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths. Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV–visible microspectrophotometers consist of a UV–visible microscope integrated with a UV–visible spectrophotometer. A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.
Microspectrophotometry
UV–visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV–visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single beam instruments. Modern instruments are capable of measuring UV–visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantage of using such instruments is that they are able to measure microscopic samples but are also able to measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry are microscopic. A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet–visible spectrophotometry can be used to determine the thickness, along with the refractive index and extinction coefficient, of thin films.
A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.
Additional applications
UV/Vis can be applied to characterize the rate of a chemical reaction. Illustrative is the conversion of the yellow-orange and blue isomers of mercury dithizonate. This method of analysis relies on the fact that absorbance is linearly proportional to concentration. The same approach allows determination of equilibria between chromophores. From the spectrum of burning gases, it is possible to determine the chemical composition of a fuel, the temperature of the gases, and the air–fuel ratio.
See also
Applied spectroscopy
Benesi–Hildebrand method
Color – Vis spectroscopy with the human eye
Charge modulation spectroscopy
DU spectrophotometer – first UV–Vis instrument
Fourier-transform spectroscopy
Infrared spectroscopy and Raman spectroscopy are other common spectroscopic techniques, usually used to obtain information about the structure of compounds or to identify compounds. Both are forms of vibrational spectroscopy.
Isosbestic point – a wavelength where absorption does not change as the reaction proceeds. Important in kinetics measurements as a control.
Near-infrared spectroscopy
Rotational spectroscopy
Slope spectroscopy
Ultraviolet–visible spectroscopy of stereoisomers
Vibrational spectroscopy
Process philosophy
Process philosophy, also ontology of becoming, or processism, is an approach in philosophy that identifies processes, changes, or shifting relationships as the only real experience of everyday living. In opposition to the classical view of change as illusory (as argued by Parmenides) or accidental (as argued by Aristotle), process philosophy posits transient occasions of change or becoming as the only fundamental things of the ordinary everyday real world. Since the time of Plato and Aristotle, classical ontology has posited ordinary world reality as constituted of enduring substances, to which transient processes are ontologically subordinate, if they are not denied. If Socrates changes, becoming sick, Socrates is still the same (the substance of Socrates being the same), and change (his sickness) only glides over his substance: change is accidental, and devoid of primary reality, whereas the substance is essential. In physics, Ilya Prigogine distinguishes between the "physics of being" and the "physics of becoming". Process philosophy covers not just scientific intuitions and experiences, but can be used as a conceptual bridge to facilitate discussions among religion, philosophy, and science. Process philosophy is sometimes classified as closer to continental philosophy than analytic philosophy, because it is usually only taught in continental philosophy departments. However, other sources state that process philosophy should be placed somewhere in the middle between the poles of analytic versus continental methods in contemporary philosophy. History In ancient Greek thought Heraclitus proclaimed that the basic nature of all things is change. The quotation from Heraclitus appears in Plato's Cratylus twice; in 401d as: Ta onta ienai te panta kai menein ouden"All entities move and nothing remains still"and in 402a Panta chōrei kai ouden menei kai dis es ton auton potamon ouk an embaies "Everything changes and nothing remains still ... and ... you cannot step twice into the same stream" Heraclitus considered fire as the most fundamental element. "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." The following is an interpretation of Heraclitus's concepts into modern terms by Nicholas Rescher. "...reality is not a constellation of things at all, but one of processes. The fundamental 'stuff' of the world is not material substance, but volatile flux, namely 'fire', and all things are versions thereof (puros tropai). Process is fundamental: the river is not an object, but a continuing flow; the sun is not a thing, but an enduring fire. Everything is a matter of process, of activity, of change (panta rhei)." An early expression of this viewpoint is in Heraclitus's fragments. He posits strife, ἡ ἔρις (strife, conflict), as the underlying basis of all reality defined by change. The balance and opposition in strife were the foundations of change and stability in the flux of existence. Similarly, the philosopher, Empedocles, who proposed the four elements (earth, air, water, fire), sees all of these as subject to an eternal flux, between the two, oscillating forces of Love (or attraction) and Strife (repulsion). 
Nietzsche and Kierkegaard In his written works, Friedrich Nietzsche proposed what has been regarded as a philosophy of becoming that encompasses a "naturalistic doctrine intended to counter the metaphysical preoccupation with being", and a theory of "the incessant shift of perspectives and interpretations in a world that lacks a grounding essence". Søren Kierkegaard posed questions of individual becoming in Christianity which were opposed to the ancient Greek philosophers' focus on the indifferent becoming of the cosmos. However, he established as much of a focus on aporia as Heraclitus and others previously had, such as in his concept of the leap of faith which marks an individual becoming. As well as this, Kierkegaard opposed his philosophy to Hegel's system of philosophy approaching becoming and difference for what he saw as a "dialectical conflation of becoming and rationality", making the system take on the same trait of motionlessness as Parmenides' system. Twentieth century In the early twentieth century, the philosophy of mathematics was undertaken to develop mathematics as an airtight, axiomatic system in which every truth could be derived logically from a set of axioms. In the foundations of mathematics, this project is variously understood as logicism or as part of the formalist program of David Hilbert. Alfred North Whitehead and Bertrand Russell attempted to complete, or at least facilitate, this program with their seminal book Principia Mathematica, which purported to build a logically consistent set theory on which to found mathematics. After this, Whitehead extended his interest to natural science, which he held needed a deeper philosophical basis. He intuited that natural science was struggling to overcome a traditional ontology of timeless material substances that does not suit natural phenomena. According to Whitehead, material is more properly understood as 'process'. Whitehead's Process and Reality Alfred North Whitehead began teaching and writing on process and metaphysics when he joined Harvard University in 1924. In his book Science and the Modern World (1925), Whitehead noted that the human intuitions and experiences of science, aesthetics, ethics, and religion influence the worldview of a community, but that in the last several centuries science dominates Western culture. Whitehead sought a holistic, comprehensive cosmology that provides a systematic descriptive theory of the world which can be used for the diverse human intuitions gained through ethical, aesthetic, religious, and scientific experiences, and not just the scientific. In 1929, Whitehead produced the most famous work of process philosophy, Process and Reality, continuing the work begun by Hegel but describing a more complex and fluid dynamic ontology. Process thought describes truth as "movement" in and through substance (Hegelian truth), rather than substances as fixed concepts or "things" (Aristotelian truth). Since Whitehead, process thought is distinguished from Hegel in that it describes entities that arise or coalesce in becoming, rather than being simply dialectically determined from prior posited determinates. These entities are referred to as complexes of occasions of experience. It is also distinguished in being not necessarily conflictual or oppositional in operation. 
Process may be integrative, destructive or both together, allowing for aspects of interdependence, influence, and confluence, and addressing coherence in universal as well as particular developments, i.e., those aspects not befitting Hegel's system. Additionally, instances of determinate occasions of experience, while always ephemeral, are nonetheless seen as important to define the type and continuity of those occasions of experience that flow from or relate to them. Whitehead's influences were not restricted to philosophers or physicists or mathematicians. He was influenced by the French philosopher Henri Bergson (1859–1941), whom he credits along with William James and John Dewey in the preface to Process and Reality. Process metaphysics For Whitehead, metaphysics is about logical frameworks for the conduct of discussions of the character of the world. It is not directly and immediately about facts of nature, but only indirectly so, in that its task is to explicitly formulate the language and conceptual presuppositions that are used to describe the facts of nature. Whitehead thinks that discovery of previously unknown facts of nature can in principle call for reconstruction of metaphysics. The process metaphysics elaborated in Process and Reality posits an ontology which is based on the two kinds of existence of an entity, that of actual entity and that of abstract entity or abstraction, also called 'object'. Actual entity is a term coined by Whitehead to refer to the entities that really exist in the natural world. For Whitehead, actual entities are spatiotemporally extended events or processes. An actual entity is how something is happening, and how its happening is related to other actual entities. The actually existing world is a multiplicity of actual entities overlapping one another. The ultimate abstract principle of actual existence for Whitehead is creativity. Creativity is a term coined by Whitehead to show a power in the world that allows the presence of an actual entity, a new actual entity, and multiple actual entities. Creativity is the principle of novelty. It is manifest in what can be called 'singular causality'. This term may be contrasted with the term 'nomic causality'. An example of singular causation is that I woke this morning because my alarm clock rang. An example of nomic causation is that alarm clocks generally wake people in the morning. Aristotle recognizes singular causality as efficient causality. For Whitehead, there are many contributory singular causes for an event. A further contributory singular cause of my being awoken by my alarm clock this morning was that I was lying asleep near it till it rang. An actual entity is a general philosophical term for an utterly determinate and completely concrete individual particular of the actually existing world or universe of changeable entities considered in terms of singular causality, about which categorical statements can be made. Whitehead's most far-reaching and radical contribution to metaphysics is his invention of a better way of choosing the actual entities. Whitehead chooses a way of defining the actual entities that makes them all alike, qua actual entities, with a single exception. For example, for Aristotle, the actual entities were the substances, such as Socrates. Besides Aristotle's ontology of substances, another example of an ontology that posits actual entities is in the monads of Leibniz, which are said to be 'windowless'. 
Whitehead's actual entities For Whitehead's ontology of processes as defining the world, the actual entities exist as the only fundamental elements of reality. The actual entities are of two kinds, temporal and atemporal. With one exception, all actual entities for Whitehead are temporal and are occasions of experience (which are not to be confused with consciousness). An entity that people commonly think of as a simple concrete object, or that Aristotle would think of as a substance, is, in this ontology, considered to be a temporally serial composite of indefinitely many overlapping occasions of experience. A human being is thus composed of indefinitely many occasions of experience. The one exceptional actual entity is at once both temporal and atemporal: God. He is objectively immortal, as well as being immanent in the world. He is objectified in each temporal actual entity; but He is not an eternal object. The occasions of experience are of four grades. The first grade comprises processes in a physical vacuum such as the propagation of an electromagnetic wave or gravitational influence across empty space. The occasions of experience of the second grade involve just inanimate matter; "matter" being the composite overlapping of occasions of experience from the previous grade. The occasions of experience of the third grade involve living organisms. Occasions of experience of the fourth grade involve experience in the mode of presentational immediacy, which means more or less what are often called the qualia of subjective experience. So far as we know, experience in the mode of presentational immediacy occurs in only more evolved animals. That some occasions of experience involve experience in the mode of presentational immediacy is the one and only reason why Whitehead makes the occasions of experience his actual entities; for the actual entities must be of the ultimately general kind. Consequently, it is inessential that an occasion of experience have an aspect in the mode of presentational immediacy; occasions of the grades one, two, and three, lack that aspect. There is no mind-matter duality in this ontology, because "mind" is simply seen as an abstraction from an occasion of experience which has also a material aspect, which is of course simply another abstraction from it; thus the mental aspect and the material aspect are abstractions from one and the same concrete occasion of experience. The brain is part of the body, both being abstractions of a kind known as persistent physical objects, neither being actual entities. Though not recognized by Aristotle, there is biological evidence, written about by Galen, that the human brain is an essential seat of human experience in the mode of presentational immediacy. We may say that the brain has a material and a mental aspect, all three being abstractions from their indefinitely many constitutive occasions of experience, which are actual entities. Time, causality, and process Inherent in each actual entity is its respective dimension of time. Potentially, each Whiteheadean occasion of experience is causally consequential on every other occasion of experience that precedes it in time, and has as its causal consequences every other occasion of experience that follows it in time; thus it has been said that Whitehead's occasions of experience are 'all window', in contrast to Leibniz's 'windowless' monads. 
In time defined relative to it, each occasion of experience is causally influenced by prior occasions of experience, and causally influences future occasions of experience. An occasion of experience consists of a process of prehending other occasions of experience, reacting to them. This is the process in process philosophy. Such process is never deterministic. Consequently, free will is essential and inherent to the universe. The causal outcomes obey the usual well-respected rule that the causes precede the effects in time. Some pairs of processes cannot be connected by cause-and-effect relations, and they are said to be spatially separated. This is in perfect agreement with the viewpoint of the Einstein theory of special relativity and with the Minkowski geometry of spacetime. It is clear that Whitehead respected these ideas, as may be seen for example in his 1919 book An Enquiry concerning the Principles of Natural Knowledge as well as in Process and Reality. In this view, time is relative to an inertial reference frame, different reference frames defining different versions of time.
Atomicity
The actual entities, the occasions of experience, are logically atomic in the sense that an occasion of experience cannot be cut and separated into two other occasions of experience. This kind of logical atomicity is perfectly compatible with indefinitely many spatio-temporal overlaps of occasions of experience. One can explain this kind of atomicity by saying that an occasion of experience has an internal causal structure that could not be reproduced in each of the two complementary sections into which it might be cut. Nevertheless, an actual entity can completely contain each of indefinitely many other actual entities. Another aspect of the atomicity of occasions of experience is that they do not change. An actual entity is what it is. An occasion of experience can be described as a process of change, but it is itself unchangeable. The atomicity of the actual entities is of a simply logical or philosophical kind, thoroughly different in concept from the natural kind of atomicity that describes the atoms of physics and chemistry.
Topology
Whitehead's theory of extension was concerned with the spatio-temporal features of his occasions of experience. Fundamental to both Newtonian and to quantum theoretical mechanics is the concept of momentum. The measurement of a momentum requires a finite spatiotemporal extent. Because it has no finite spatiotemporal extent, a single point of Minkowski space cannot be an occasion of experience, but is an abstraction from an infinite set of overlapping or contained occasions of experience, as explained in Process and Reality. Though the occasions of experience are atomic, they are not necessarily separate in extension, spatiotemporally, from one another. Indefinitely many occasions of experience can overlap in Minkowski space. Nexus is a term coined by Whitehead to denote a network of actual entities within the universe. Actual entities are spread throughout the universe of actual entities; they clash with one another and thereby form other actual entities. The actual entities out of which, and among which, a new actual entity is born are referred to as its nexus. An example of a nexus of temporally overlapping occasions of experience is what Whitehead calls an enduring physical object, which corresponds closely with an Aristotelian substance. An enduring physical object has a temporally earliest and a temporally last member.
Every member (apart from the earliest) of such a nexus is a causal consequence of the earliest member of the nexus, and every member (apart from the last) of such a nexus is a causal antecedent of the last member of the nexus. There are indefinitely many other causal antecedents and consequences of the enduring physical object, which overlap, but are not members, of the nexus. No member of the nexus is spatially separate from any other member. Within the nexus are indefinitely many continuous streams of overlapping nexūs, each stream including the earliest and the last member of the enduring physical object. Thus an enduring physical object, like an Aristotelian substance, undergoes changes and adventures during the course of its existence. In some contexts, especially in the theory of relativity in physics, the word 'event' refers to a single point in Minkowski or in Riemannian space-time. A point event is not a process in the sense of Whitehead's metaphysics. Neither is a countable sequence or array of points. A Whiteheadian process is most importantly characterized by extension in space-time, marked by a continuum of uncountably many points in a Minkowski or a Riemannian space-time. The word 'event', indicating a Whiteheadian actual entity, is not being used in the sense of a point event.
Whitehead's abstractions
Whitehead's abstractions are conceptual entities that are abstracted from or derived from and founded upon his actual entities. Abstractions are themselves not actual entities. They are the only entities that can be real but are not actual entities. This statement is one form of Whitehead's 'ontological principle'. An abstraction is a conceptual entity that refers to more than one single actual entity. Whitehead's ontology refers to importantly structured collections of actual entities as nexuses of actual entities. Collection of actual entities into a nexus emphasizes some aspect of those entities, and that emphasis is an abstraction, because it means that some aspects of the actual entities are emphasized or dragged away from their actuality, while other aspects are de-emphasized or left out or left behind. 'Eternal object' is a term coined by Whitehead. It is an abstraction, a possibility, or pure potential. It can be an ingredient of some actual entity. It is a principle that can give a particular form to an actual entity. Whitehead admitted indefinitely many eternal objects. An example of an eternal object is a number, such as the number 'two'. Whitehead held that eternal objects are abstractions of a very high degree of abstraction. Many abstractions, including eternal objects, are potential ingredients of processes.
Relation between actual entities and abstractions stated in the ontological principle
For Whitehead, besides its temporal generation by the actual entities which are its contributory causes, a process may be considered as a concrescence of abstract ingredient eternal objects. God enters into every temporal actual entity. Whitehead's ontological principle is that whatever reality pertains to an abstraction is derived from the actual entities upon which it is founded or of which it is comprised.
Causation and concrescence of a process
Concrescence is a term coined by Whitehead to denote the process by which an actual entity that is as yet without form comes jointly to full, determinate actuality ('satisfaction') on the basis of the data, or information, supplied by the universe. The formation of an actual entity thus always proceeds from existing data.
The process of concrescence can be regarded as a process of subjectification. Datum is a term coined by Whitehead for the various items of information possessed by an actual entity. In process philosophy, a datum is obtained through the events of concrescence. Every actual entity has a variety of data.
Commentary on Whitehead and on process philosophy
Whitehead is not an idealist in the strict sense. Whitehead's thought may be regarded as related to the idea of panpsychism (also known as panexperientialism, because of Whitehead's emphasis on experience).
On God
Whitehead's philosophy is complex, subtle, and nuanced regarding the concept of "God". In Process and Reality Corrected Edition (1978), the editors elaborate Whitehead's conception of "God": He is the unconditioned actuality of conceptual feeling at the base of things; so that by reason of this primordial actuality, there is an order in the relevance of eternal objects to the process of creation. [...] The particularities of the actual world presuppose it; while it merely presupposes the general metaphysical character of creative advance, of which it is the primordial exemplification. [emphasis in original] Process philosophy might be considered, according to some theistic forms of religion, to give God a special place in the universe of occasions of experience. Regarding Whitehead's use of the term "occasions" in reference to "God", Process and Reality Corrected Edition explains: 'Actual entities' - also termed 'actual occasions' - are the final real things of which the world is made up. There is no going behind actual entities to find anything more real. They differ among themselves: God is an actual entity, and so is the most trivial puff of existence in far-off empty space. But, though there are gradations of importance, and diversities of function, yet in the principles which actuality exemplifies all are on the same level. The final facts are, all alike, actual entities; and these actual entities are drops of experience, complex and interdependent. It can also be assumed within some forms of theology that a God encompasses all the other occasions of experience but also transcends them; this might lead to the argument that Whitehead endorses some form of panentheism. Since, it is argued theologically, "free will" is inherent to the nature of the universe, Whitehead's God is not omnipotent in Whitehead's metaphysics. God's role is to offer enhanced occasions of experience. God participates in the evolution of the universe by offering possibilities, which may be accepted or rejected. Whitehead's thinking here has given rise to process theology, whose prominent advocates include Charles Hartshorne, John B. Cobb, Jr., and Hans Jonas, who was also influenced by the non-theological philosopher Martin Heidegger. However, other process philosophers have questioned Whitehead's theology, seeing it as a regressive Platonism. Whitehead enumerated three essential natures of God. The primordial nature of God consists of all potentialities of existence for actual occasions, which Whitehead dubbed eternal objects. God can offer possibilities by ordering the relevance of eternal objects. The consequent nature of God prehends everything that happens in reality. As such, God experiences all of reality in a sentient manner. The last nature is the superjective. This is the way in which God's synthesis becomes a sense-datum for other actual entities. In some sense, God is prehended by existing actual entities.
Legacy and applications Biology In plant morphology, Rolf Sattler developed a process morphology (dynamic morphology) that overcomes the structure/process (or structure/function) dualism that is commonly taken for granted in biology. According to process morphology, structures such as leaves of plants do not have processes, they are processes. In evolution and in development, the nature of the changes of biological objects are considered by many authors to be more radical than in physical systems. In biology, changes are not just changes of state in a pre-given space, instead the space and more generally the mathematical structures required to understand object change over time. Ecology With its perspective that everything is interconnected, that all life has value, and that non-human entities are also experiencing subjects, process philosophy has played an important role in discourse on ecology and sustainability. The first book to connect process philosophy with environmental ethics was John B. Cobb, Jr.'s 1971 work, Is It Too Late: A Theology of Ecology. In a more recent book (2018) edited by John B. Cobb, Jr. and Wm. Andrew Schwartz, Putting Philosophy to Work: Toward an Ecological Civilization contributors explicitly explore the ways in which process philosophy can be put to work to address the most urgent issues facing our world today, by contributing to a transition toward an ecological civilization. That book emerged from the largest international conference held on the theme of ecological civilization (Seizing an Alternative: Toward an Ecological Civilization) which was organized by the Center for Process Studies in June 2015. The conference brought together roughly 2,000 participants from around the world and featured such leaders in the environmental movement as Bill McKibben, Vandana Shiva, John B. Cobb, Jr., Wes Jackson, and Sheri Liao. The notion of ecological civilization is often affiliated with the process philosophy of Alfred North Whitehead—especially in China. Mathematics In the philosophy of mathematics, some of Whitehead's ideas re-emerged in combination with cognitivism as the cognitive science of mathematics and embodied mind theses. Somewhat earlier, exploration of mathematical practice and quasi-empiricism in mathematics from the 1950s to 1980s had sought alternatives to metamathematics in social behaviours around mathematics itself: for instance, Paul Erdős's simultaneous belief in Platonism and a single "big book" in which all proofs existed, combined with his personal obsessive need or decision to collaborate with the widest possible number of other mathematicians. The process, rather than the outcomes, seemed to drive his explicit behaviour and odd use of language, as if the synthesis of Erdős and collaborators in seeking proofs, creating sense-datum for other mathematicians, was itself the expression of a divine will. Certainly, Erdős behaved as if nothing else in the world mattered, including money or love, as emphasized in his biography The Man Who Loved Only Numbers. Medicine Several fields of science and especially medicine seem to make liberal use of ideas in process philosophy, notably the theory of pain and healing of the late 20th century. The philosophy of medicine began to deviate somewhat from scientific method and an emphasis on repeatable results in the very late 20th century by embracing population thinking, and a more pragmatic approach to issues in public health, environmental health, and especially mental health. In this latter field, R. D. 
Laing, Thomas Szasz, and Michel Foucault were instrumental in moving medicine away from emphasis on "cures" and towards concepts of individuals in balance with their society, both of which are changing, and against which no benchmarks or finished "cures" were very likely to be measurable. Psychology In psychology, the subject of imagination was again explored more extensively since Whitehead, and the question of feasibility or "eternal objects" of thought became central to the impaired theory of mind explorations that framed postmodern cognitive science. A biological understanding of the most eternal object, that being the emerging of similar but independent cognitive apparatus, led to an obsession with the process "embodiment", that being, the emergence of these cognitions. Like Whitehead's God, especially as elaborated in J. J. Gibson's perceptual psychology emphasizing affordances, by ordering the relevance of eternal objects (especially the cognitions of other such actors), the world becomes. Or, it becomes simple enough for human beings to begin to make choices, and to prehend what happens as a result. These experiences may be summed in some sense but can only approximately be shared, even among very similar cognitions with identical DNA. An early explorer of this view was Alan Turing who sought to prove the limits of expressive complexity of human genes in the late 1940s, to put bounds on the complexity of human intelligence and so assess the feasibility of artificial intelligence emerging. Since 2000, Process Psychology has progressed as an independent academic and therapeutic discipline: In 2000, Michel Weber created the Whitehead Psychology Nexus: an open forum dedicated to the cross-examination of Alfred North Whitehead's process philosophy and the various facets of the contemporary psychological field. Philosophy of movement The philosophy of movement is a sub-area within process philosophy that treats processes as movements. It studies processes as flows, folds, and fields in historical patterns of centripetal, centrifugal, tensional, and elastic motion. See Thomas Nail's philosophy of movement and process materialism. See also Concepts Actual idealism Anicca, the Buddhist doctrine that all is "transient, evanescent, inconstant" Panta rhei, Heraclitus's concept that "everything flows" Dialectic Dialectical monism Elisionism Holomovement Pancreativism Salishan languages#Nounlessness Speculative realism People John B. Cobb David Ray Griffin Arthur Peacocke Michel Weber Arran Gare Joseph A. Bracken Milič Čapek Wilmon Henry Sheldon Thomas Nail Iain McGilchrist Eugene Gendlin Rein Raud Charles Hartshorne References External links Academia pages of the Center for Philosophical Practice. Whitehead Research Project Process and Reality. Part V. Final Interpretation Wolfgang Sohst: Prozessontologie. Ein systematischer Entwurf der Entstehung von Existenz (Berlin 2009) Critique of a Metaphysics of Process (Antwerp 2012) Holism Religion and science Subfields of metaphysics Alfred North Whitehead
Atomic, molecular, and optical physics
Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Typically, the theory and applications of emission, absorption, scattering of electromagnetic radiation (light) from excited atoms and molecules, analysis of spectroscopy, generation of lasers and masers, and the optical properties of matter in general, fall into these categories. Atomic and molecular physics Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus, while molecular physics is the study of the physical properties of molecules. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry, physical chemistry and chemical physics. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. Additionally to the electronic excitation states which are known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states, therefore pure rotational spectra are in the far infrared region (about 30 - 150 μm wavelength) of the electromagnetic spectrum. Vibrational spectra are in the near infrared (about 1 - 5 μm) and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measuring rotational and vibrational spectra properties of molecules like the distance between the nuclei can be calculated. As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Optical physics Optical physics is the study of the generation of electromagnetic radiation, the properties of that radiation, and the interaction of that radiation with matter, especially its manipulation and control. It differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. There is no strong distinction, however, between optical physics, applied optics, and optical engineering, since the devices of optical engineering and the applications of applied optics are necessary for basic research in optical physics, and that research leads to the development of new devices and applications. 
Often the same people are involved in both the basic research and the applied technology development, for example the experimental demonstration of electromagnetically induced transparency by S. E. Harris and of slow light by Harris and Lene Vestergaard Hau. Researchers in optical physics use and develop light sources that span the electromagnetic spectrum from microwaves to X-rays. The field includes the generation and detection of light, linear and nonlinear optical processes, and spectroscopy. Lasers and laser spectroscopy have transformed optical science. Major study in optical physics is also devoted to quantum optics and coherence, and to femtosecond optics. In optical physics, support is also provided in areas such as the nonlinear response of isolated atoms to intense, ultra-short electromagnetic fields, the atom-cavity interaction at high fields, and quantum properties of the electromagnetic field. Other important areas of research include the development of novel optical techniques for nano-optical measurements, diffractive optics, low-coherence interferometry, optical coherence tomography, and near-field microscopy. Research in optical physics places an emphasis on ultrafast optical science and technology. The applications of optical physics create advancements in communications, medicine, manufacturing, and even entertainment.
History
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms, in modern terms the basic unit of a chemical element. This theory was developed by John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their observable properties in bulk, summarized by the developing periodic table, by John Newlands and Dmitri Mendeleyev around the mid to late 19th century. Later, the connection between atomic physics and optical physics became apparent through the discovery of spectral lines and attempts to describe the phenomenon, notably by Joseph von Fraunhofer, Fresnel, and others in the 19th century. From that time to the 1920s, physicists were seeking to explain atomic spectra and blackbody radiation. One attempt to explain hydrogen spectral lines was the Bohr atom model. Experiments involving electromagnetic radiation and matter (such as the photoelectric effect, the Compton effect, and the spectrum of sunlight due to the then-unknown element helium), together with the limitation of the Bohr model to hydrogen and numerous other reasons, led to an entirely new mathematical model of matter and light: quantum mechanics.
Classical oscillator model of matter
Early models to explain the origin of the index of refraction treated an electron in an atomic system classically according to the model of Paul Drude and Hendrik Lorentz. The theory was developed to attempt to provide an origin for the wavelength-dependent refractive index n of a material. In this model, incident electromagnetic waves forced an electron bound to an atom to oscillate. The amplitude of the oscillation would then have a relationship to the frequency of the incident electromagnetic wave and the resonant frequencies of the oscillator. The superposition of these emitted waves from many oscillators would then lead to a wave which moved more slowly.
Early quantum model of matter and light
Max Planck derived a formula to describe the electromagnetic field inside a box when in thermal equilibrium in 1900. His model consisted of a superposition of standing waves.
In one dimension, the box has length L, and only sinusoidal waves of wavenumber k = nπ/L can occur in the box, where n is a positive integer (mathematically denoted by n ∈ ℕ). The equation describing these standing waves is given by E = E0 sin(kx), where E0 is the magnitude of the electric field amplitude, and E is the magnitude of the electric field at position x. From this basis, Planck's law was derived. In 1911, Ernest Rutherford concluded, based on alpha particle scattering, that an atom has a central pointlike proton. He also thought that an electron would still be attracted to the proton by Coulomb's law, which he had verified still held at small scales. As a result, he believed that electrons revolved around the proton. Niels Bohr, in 1913, combined the Rutherford model of the atom with the quantisation ideas of Planck. Only specific and well-defined orbits of the electron could exist, which also do not radiate light. In jumping between orbits the electron would emit or absorb light corresponding to the difference in energy of the orbits. His prediction of the energy levels was then consistent with observation. These results, based on a discrete set of specific standing waves, were inconsistent with the continuous classical oscillator model. Work by Albert Einstein in 1905 on the photoelectric effect led to the association of a light wave of frequency ν with a photon of energy hν. In 1917 Einstein created an extension to Bohr's model by the introduction of the three processes of stimulated emission, spontaneous emission and absorption (electromagnetic radiation).
Modern treatments
The largest steps towards the modern treatment were the formulation of quantum mechanics with the matrix mechanics approach by Werner Heisenberg and the discovery of the Schrödinger equation by Erwin Schrödinger. There are a variety of semi-classical treatments within AMO. Which aspects of the problem are treated quantum mechanically and which are treated classically is dependent on the specific problem at hand. The semi-classical approach is ubiquitous in computational work within AMO, largely due to the large decrease in computational cost and complexity associated with it. For matter under the action of a laser, a fully quantum mechanical treatment of the atomic or molecular system is combined with the system being under the action of a classical electromagnetic field. Since the field is treated classically it cannot deal with spontaneous emission. This semi-classical treatment is valid for most systems, particularly those under the action of high intensity laser fields. The distinction between optical physics and quantum optics is the use of semi-classical and fully quantum treatments respectively. Within collision dynamics and using the semi-classical treatment, the internal degrees of freedom may be treated quantum mechanically, whilst the relative motion of the quantum systems under consideration is treated classically. When considering medium to high speed collisions, the nuclei can be treated classically while the electron is treated quantum mechanically. In low speed collisions the approximation fails. Classical Monte-Carlo methods for the dynamics of electrons can be described as semi-classical in that the initial conditions are calculated using a fully quantum treatment, but all further treatment is classical.
Isolated atoms and molecules
Atomic, molecular and optical physics frequently considers atoms and molecules in isolation.
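Before turning to isolated atoms and molecules in more detail, a worked illustration of the Bohr picture outlined above may be useful. The Python sketch below evaluates the hydrogen energy levels, E_n = −13.6 eV / n², and the wavelength of the photon emitted when the electron jumps between orbits, using E = hν = hc/λ; the constants are rounded textbook values, and the sketch is purely illustrative.

```python
RYDBERG_EV = 13.606   # hydrogen ground-state binding energy in eV (rounded)
HC_EV_NM = 1239.84    # h*c expressed in eV*nm (rounded)

def level_energy_ev(n):
    """Bohr-model energy of the n-th hydrogen orbit, in eV."""
    return -RYDBERG_EV / n ** 2

def emission_wavelength_nm(n_upper, n_lower):
    """Wavelength of the photon emitted when the electron drops from
    n_upper to n_lower; the photon carries the orbit energy difference."""
    delta_e = level_energy_ev(n_upper) - level_energy_ev(n_lower)
    return HC_EV_NM / delta_e

print(level_energy_ev(1), level_energy_ev(2), level_energy_ev(3))
print(emission_wavelength_nm(3, 2))   # Balmer-alpha, close to 656 nm (visible red)
print(emission_wavelength_nm(2, 1))   # Lyman-alpha, close to 122 nm (ultraviolet)
```

The predicted line positions agree closely with the observed hydrogen spectrum, which is the consistency with observation that made Bohr's model influential.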
Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons, whilst molecular models are typically concerned with molecular hydrogen and its molecular hydrogen ion. The field is concerned with processes such as ionization, above-threshold ionization, and excitation by photons or collisions with atomic particles. While modelling atoms in isolation may not seem realistic, if one considers molecules in a gas or plasma then the time-scales for molecule–molecule interactions are huge in comparison to the atomic and molecular processes that we are concerned with. This means that the individual molecules can be treated as if each were in isolation for the vast majority of the time. By this consideration, atomic and molecular physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with huge numbers of molecules.

Electronic configuration
Electrons form notional shells around the nucleus. These are naturally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically other electrons). Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy, according to the conservation of energy. The atom is said to have undergone the process of ionization. In the event that the electron absorbs a quantity of energy less than the binding energy, it may transition to an excited state or to a virtual state. After a statistically sufficient quantity of time, an electron in an excited state will undergo a transition to a lower state via spontaneous emission. The change in energy between the two energy levels must be accounted for (conservation of energy). In a neutral atom, the system will emit a photon of the difference in energy. However, if the lower state is in an inner shell, a phenomenon known as the Auger effect may take place, in which the energy is transferred to another bound electron, causing it to go into the continuum. This allows one to multiply ionize an atom with a single photon. There are strict selection rules as to the electronic configurations that can be reached by excitation by light; however, there are no such rules for excitation by collision processes.

See also
Born–Oppenheimer approximation
Frequency doubling
Diffraction
Hyperfine structure
Interferometry
Isomeric shift
Metamaterial cloaking
Molecular energy state
Molecular modeling
Nanotechnology
Negative index metamaterials
Nonlinear optics
Optical engineering
Photon polarization
Quantum chemistry
Quantum optics
Rigid rotor
Spectroscopy
Superlens
Stationary state
Transition of state
External links
ScienceDirect - Advances In Atomic, Molecular, and Optical Physics
Journal of Physics B: Atomic, Molecular and Optical Physics

Institutions
American Physical Society - Division of Atomic, Molecular & Optical Physics
European Physical Society - Atomic, Molecular & Optical Physics Division
National Science Foundation - Atomic, Molecular and Optical Physics
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
JILA - Atomic and Molecular Physics
Joint Quantum Institute at University of Maryland and NIST
ORNL Physics Division
Queen's University Belfast - Center for Theoretical, Atomic, Molecular and Optical Physics
University of California, Berkeley - Atomic, Molecular and Optical Physics
Isomer
In chemistry, isomers are molecules or polyatomic ions with identical molecular formula – that is, the same number of atoms of each element – but distinct arrangements of atoms in space. Isomerism refers to the existence or possibility of isomers. Isomers do not necessarily share similar chemical or physical properties. Two main forms of isomerism are structural (or constitutional) isomerism, in which bonds between the atoms differ, and stereoisomerism (or spatial isomerism), in which the bonds are the same but the relative positions of the atoms differ. Isomeric relationships form a hierarchy. Two chemicals might be the same constitutional isomer, but upon deeper analysis be stereoisomers of each other. Two molecules that are the same stereoisomer as each other might be in different conformational forms or be different isotopologues. The depth of analysis depends on the field of study or the chemical and physical properties of interest. The English word "isomer" is a back-formation from "isomeric", which was borrowed through German isomerisch from a Swedish term, which in turn was coined from Greek ἰσόμερος, with roots ἴσος = "equal" and μέρος = "part".

Structural isomers
Structural isomers have the same number of atoms of each element (hence the same molecular formula), but the atoms are connected in distinct ways. For example, there are three distinct compounds with the molecular formula C3H8O. The first two isomers of C3H8O are propanols, that is, alcohols derived from propane. Both have a chain of three carbon atoms connected by single bonds, with the remaining carbon valences being filled by seven hydrogen atoms and by a hydroxyl group -OH comprising the oxygen atom bound to a hydrogen atom. These two isomers differ in which carbon the hydroxyl is bound to: either to an extremity of the carbon chain, giving propan-1-ol (1-propanol, n-propyl alcohol, n-propanol; I), or to the middle carbon, giving propan-2-ol (2-propanol, isopropyl alcohol, isopropanol; II). These can be described by the condensed structural formulas H3C-CH2-CH2OH and H3C-CH(OH)-CH3. The third isomer of C3H8O is the ether methoxyethane (ethyl methyl ether; III). Unlike the other two, it has the oxygen atom connected to two carbons, and all eight hydrogens bonded directly to carbons. It can be described by the condensed formula H3C-CH2-O-CH3. The alcohol "3-propanol" is not another isomer, since the difference between it and 1-propanol is not real; it is only the result of an arbitrary choice in the direction of numbering the carbons along the chain. For the same reason, "ethoxymethane" is the same molecule as methoxyethane, not another isomer. 1-Propanol and 2-propanol are examples of positional isomers, which differ by the position at which certain features, such as double bonds or functional groups, occur on a "parent" molecule (propane, in that case). There are also three structural isomers of the hydrocarbon C3H4. In two of the isomers, the three carbon atoms are connected in an open chain, but in one of them (propadiene or allene; I) the carbons are connected by two double bonds, while in the other (propyne or methylacetylene; II) they are connected by a single bond and a triple bond. In the third isomer (cyclopropene; III) the three carbons are connected into a ring by two single bonds and a double bond. In all three, the remaining valences of the carbon atoms are satisfied by the four hydrogens.
Again, note that there is only one structural isomer with a triple bond, because the other possible placement of that bond is just drawing the three carbons in a different order. For the same reason, there is only one cyclopropene, not three.

Tautomers
Tautomers are structural isomers which readily interconvert, so that two or more species co-exist in equilibrium, such as H-X-Y=Z <=> X=Y-Z-H. Important examples are keto-enol tautomerism and the equilibrium between the neutral and zwitterionic forms of an amino acid.

Resonance forms
The structure of some molecules is sometimes described as a resonance between several apparently different structural isomers. The classical example is 1,2-dimethylbenzene (o-xylene), which is often described as a mix of two apparently distinct structural isomers. However, neither of these two structures describes a real compound; they are fictions devised as a way to describe (by their "averaging" or "resonance") the actual delocalized bonding of o-xylene, which is the single isomer of C8H10 with a benzene core and two methyl groups in adjacent positions.

Stereoisomers
Stereoisomers have the same atoms or isotopes connected by bonds of the same type, but differ in their shapes – the relative positions of those atoms in space – apart from rotations and translations. In theory, one can imagine any arrangement in space of the atoms of a molecule or ion to be gradually changed to any other arrangement in infinitely many ways, by moving each atom along an appropriate path. However, changes in the positions of atoms will generally change the internal energy of a molecule, which is determined by the angles between bonds in each atom and by the distances between atoms (whether they are bonded or not). A conformational isomer is an arrangement of the atoms of the molecule or ion for which the internal energy is a local minimum; that is, an arrangement such that any small changes in the positions of the atoms will increase the internal energy, and hence result in forces that tend to push the atoms back to the original positions. Changing the shape of the molecule from such an energy minimum A to another energy minimum B will therefore require going through configurations that have higher energy than A and B. That is, a conformational isomer is separated from any other isomer by an energy barrier: the amount that must be temporarily added to the internal energy of the molecule in order to go through all the intermediate conformations along the "easiest" path (the one that minimizes that amount). A classic example of conformational isomerism is cyclohexane. Alkanes generally have minimum energy when the C-C-C angles are close to 110 degrees. Conformations of the cyclohexane molecule with all six carbon atoms on the same plane have a higher energy, because some or all the C-C-C angles must be far from that value (120 degrees for a regular hexagon). Thus the conformations which are local energy minima have the ring twisted in space, according to one of two patterns known as chair (with the carbons alternately above and below their mean plane) and boat (with two opposite carbons above the plane, and the other four below it). If the energy barrier between two conformational isomers is low enough, it may be overcome by the random inputs of thermal energy that the molecule gets from interactions with the environment or from its own vibrations. In that case, the two isomers may as well be considered a single isomer, depending on the temperature and the context.
For example, the two conformations of cyclohexane convert to each other quite rapidly at room temperature (in the liquid state), so that they are usually treated as a single isomer in chemistry. In some cases, the barrier can be crossed by quantum tunneling of the atoms themselves. This last phenomenon prevents the separation of stereoisomers of fluorochloroamine NHFCl or hydrogen peroxide H2O2, because the two conformations with minimum energy interconvert in a few picoseconds even at very low temperatures. Conversely, the energy barrier may be so high that the easiest way to overcome it would require temporarily breaking and then reforming one or more bonds of the molecule. In that case, the two isomers usually are stable enough to be isolated and treated as distinct substances. These isomers are then said to be different configurational isomers or "configurations" of the molecule, not just two different conformations. (However, one should be aware that the terms "conformation" and "configuration" are largely synonymous outside of chemistry, and their distinction may be controversial even among chemists.) Interactions with other molecules of the same or different compounds (for example, through hydrogen bonds) can significantly change the energy of conformations of a molecule. Therefore, the possible isomers of a compound in solution or in its liquid and solid phases may be very different from those of an isolated molecule in vacuum. Even in the gas phase, some compounds like acetic acid will exist mostly in the form of dimers or larger groups of molecules, whose configurations may be different from those of the isolated molecule.

Enantiomers
Two compounds are said to be enantiomers if their molecules are mirror images of each other that cannot be made to coincide only by rotations or translations – like a left hand and a right hand. The two shapes are said to be chiral. A classical example is bromochlorofluoromethane (CHFClBr). The two enantiomers can be distinguished, for example, by whether the path F->Cl->Br turns clockwise or counterclockwise as seen from the hydrogen atom. In order to change one conformation into the other, at some point those four atoms would have to lie on the same plane – which would require severely straining or breaking their bonds to the carbon atom. The corresponding energy barrier between the two conformations is so high that there is practically no conversion between them at room temperature, and they can be regarded as different configurations. The compound chlorofluoromethane CH2ClF, in contrast, is not chiral: the mirror image of its molecule is also obtained by a half-turn about a suitable axis. Another example of a chiral compound is 2,3-pentadiene H3C-CH=C=CH-CH3, a hydrocarbon that contains two overlapping double bonds. The double bonds are such that the three middle carbons are in a straight line, while the first three and last three lie on perpendicular planes. The molecule and its mirror image are not superimposable, even though the molecule has an axis of symmetry. The two enantiomers can be distinguished, for example, by the right-hand rule. This type of isomerism is called axial isomerism. Enantiomers behave identically in chemical reactions, except when reacted with chiral compounds or in the presence of chiral catalysts, such as most enzymes. For this latter reason, the two enantiomers of most chiral compounds usually have markedly different effects and roles in living organisms.
In biochemistry and food science, the two enantiomers of a chiral molecule – such as glucose – are usually identified and treated as very different substances. Each enantiomer of a chiral compound typically rotates the plane of polarized light that passes through it. The rotation has the same magnitude but opposite senses for the two isomers, and can be a useful way of distinguishing and measuring their concentration in a solution. For this reason, enantiomers were formerly called "optical isomers". However, this term is ambiguous and is discouraged by the IUPAC. Stereoisomers that are not enantiomers are called diastereomers. Some diastereomers may contain a chiral center, some do not. Some enantiomer pairs (such as those of trans-cyclooctene) can be interconverted by internal motions that change bond lengths and angles only slightly. Other pairs (such as CHFClBr) cannot be interconverted without breaking bonds, and therefore are different configurations.

Cis-trans isomerism
A double bond between two carbon atoms forces the remaining four bonds (if they are single) to lie on the same plane, perpendicular to the plane of the bond as defined by its π orbital. If the two bonds on each carbon connect to different atoms, two distinct conformations are possible, which differ from each other by a twist of 180 degrees of one of the carbons about the double bond. The classical example is dichloroethene C2H2Cl2, specifically the structural isomer Cl-HC=CH-Cl that has one chlorine bonded to each carbon. It has two conformational isomers, with the two chlorines on the same side or on opposite sides of the double bond's plane. They are traditionally called cis (from Latin meaning "on this side of") and trans ("on the other side of"), respectively; or Z and E in the IUPAC recommended nomenclature. Conversion between these two forms usually requires temporarily breaking bonds (or turning the double bond into a single bond), so the two are considered different configurations of the molecule. More generally, cis–trans isomerism (formerly called "geometric isomerism") occurs in molecules where the relative orientation of two distinguishable functional groups is restricted by a somewhat rigid framework of other atoms. For example, in the cyclic alcohol inositol (CHOH)6 (a six-fold alcohol of cyclohexane), the six-carbon cyclic backbone largely prevents the hydroxyl -OH and the hydrogen -H on each carbon from switching places. Therefore, one has different configurational isomers depending on whether each hydroxyl is on "this side" or "the other side" of the ring's mean plane. Discounting isomers that are equivalent under rotations, there are nine isomers that differ by this criterion, and behave as different stable substances (two of them being enantiomers of each other). The most common one in nature (myo-inositol) has the hydroxyls on carbons 1, 2, 3 and 5 on the same side of that plane, and can therefore be called cis-1,2,3,5-trans-4,6-cyclohexanehexol. And each of these cis–trans isomers can possibly have stable "chair" or "boat" conformations (although the barriers between these are significantly lower than those between different cis–trans isomers). Cis and trans isomers also occur in inorganic coordination compounds, such as square planar MX2Y2 complexes and octahedral MX4Y2 complexes. For more complex organic molecules, the cis and trans labels are ambiguous. The IUPAC recommends a more precise labeling scheme, based on the CIP priorities for the bonds at each carbon atom.
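The count of nine inositol stereoisomers can be checked by brute force. The Python sketch below is only an illustration under stated assumptions: each arrangement is encoded as six "up"/"down" hydroxyls, and two arrangements are treated as the same isomer when they are related by rotating the ring or by turning it over (which reverses the carbon order and swaps up with down); mirror images are deliberately not identified, so the enantiomeric pair stays counted as two.

from itertools import product

# Brute-force check of the "nine isomers" count for inositol, assuming the
# only physical equivalences are (a) rotating the ring about its axis and
# (b) turning the ring over, which reverses the order of the carbons and
# swaps "up" with "down".  Mirror images are not identified, so the one
# enantiomeric pair is counted as two isomers, matching the text above.

def equivalent_patterns(pattern):
    n = len(pattern)
    images = []
    for shift in range(n):
        rotated = tuple(pattern[(i + shift) % n] for i in range(n))
        images.append(rotated)                                   # ring rotation
        images.append(tuple(1 - x for x in reversed(rotated)))   # ring turned over
    return images

def canonical(pattern):
    return min(equivalent_patterns(pattern))

# 1 = hydroxyl above the mean plane, 0 = below, one entry per ring carbon.
distinct = {canonical(p) for p in product((0, 1), repeat=6)}
print(len(distinct))  # prints 9

Running the sketch prints 9, in agreement with the count quoted above.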
Centers with non-equivalent bonds
More generally, atoms or atom groups that can form three or more non-equivalent single bonds (such as the transition metals in coordination compounds) may give rise to multiple stereoisomers when different atoms or groups are attached at those positions. The same is true if a center with six or more equivalent bonds has two or more substituents. For instance, in the compound PF4Cl, the bonds from the phosphorus atom to the five halogens have approximately trigonal bipyramidal geometry. Thus two stereoisomers with that formula are possible, depending on whether the chlorine atom occupies one of the two "axial" positions or one of the three "equatorial" positions. For the compound PF3Cl2, three isomers are possible, with zero, one, or two chlorines in the axial positions. As another example, a complex with a formula like MX3Y3, where the central atom M forms six bonds with octahedral geometry, has at least two facial–meridional isomers, depending on whether the three X bonds (and thus also the three Y bonds) are directed at the three corners of one face of the octahedron (fac isomer) or lie on the same equatorial or "meridian" plane of it (mer isomer).

Rotamers and atropisomers
Two parts of a molecule that are connected by just one single bond can rotate about that bond. While the bond itself is indifferent to that rotation, attractions and repulsions between the atoms in the two parts normally cause the energy of the whole molecule to vary (and possibly also the two parts to deform) depending on the relative angle of rotation φ between the two parts. Then there will be one or more special values of φ for which the energy is at a local minimum. The corresponding conformations of the molecule are called rotational isomers or rotamers. Thus, for example, in an ethane molecule H3C-CH3, all the bond angles and lengths are narrowly constrained, except that the two methyl groups can independently rotate about the C-C axis. Thus, even if those angles and distances are assumed fixed, there are infinitely many conformations for the ethane molecule, which differ by the relative angle φ of rotation between the two groups. The feeble repulsion between the hydrogen atoms in the two methyl groups causes the energy to be minimized for three specific values of φ, 120° apart. In those configurations, the six planes H-C-C or C-C-H are 60° apart. Discounting rotations of the whole molecule, that configuration is a single isomer – the so-called staggered conformation. Rotation between the two halves of the molecule 1,2-dichloroethane (ClH2C-CH2Cl) also has three local energy minima, but they have different energies due to differences between the H-H, Cl-Cl, and H-Cl interactions. There are therefore three rotamers: a trans isomer where the two chlorines are on the same plane as the two carbons, but with oppositely directed bonds; and two gauche isomers, mirror images of each other, in which the two -CH2Cl groups are rotated about 109° from that position. The computed energy difference between trans and gauche is ~1.5 kcal/mol, the barrier for the ~109° rotation from trans to gauche is ~5 kcal/mol, and that of the ~142° rotation from one gauche to its enantiomer is ~8 kcal/mol. The situation for butane is similar, but with slightly lower gauche energies and barriers. If the two parts of the molecule connected by a single bond are bulky or charged, the energy barriers may be much higher.
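The energy differences just quoted translate into equilibrium populations through the Boltzmann factor. The sketch below is a rough estimate under simplifying assumptions (an illustrative temperature of 298 K, plain Boltzmann weights, no entropy corrections); it suggests that the trans rotamer of 1,2-dichloroethane dominates at room temperature. The biphenyl case described next shows how bulky groups can raise such barriers much further.

import math

# Rough equilibrium populations for the rotamers of 1,2-dichloroethane,
# using the ~1.5 kcal/mol trans-gauche energy difference quoted above.
# The 298 K temperature and the use of plain Boltzmann weights (ignoring
# entropy differences) are simplifying assumptions.
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # assumed room temperature, K
dE = 1.5       # kcal/mol, each gauche form relative to trans

gauche_weight = 2.0 * math.exp(-dE / (R * T))    # two mirror-image gauche rotamers
trans_fraction = 1.0 / (1.0 + gauche_weight)
print(f"trans fraction ~ {trans_fraction:.2f}")  # about 0.86 under these assumptions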
For example, in the compound biphenyl – two phenyl groups connected by a single bond – the repulsion between the hydrogen atoms closest to the central single bond gives the fully planar conformation, with the two rings on the same plane, a higher energy than conformations where the two rings are skewed. In the gas phase, the molecule therefore has at least two rotamers, with the ring planes twisted by ±47°, which are mirror images of each other. The barrier between them is rather low (~8 kJ/mol). This steric hindrance effect is more pronounced when those four hydrogens are replaced by larger atoms or groups, like chlorines or carboxyls. If the barrier is high enough for the two rotamers to be separated as stable compounds at room temperature, they are called atropisomers.

Topoisomers
Large molecules may have isomers that differ by the topology of their overall arrangement in space, even if there is no specific geometric constraint that separates them. For example, long chains may be twisted to form topologically distinct knots, with interconversion prevented by bulky substituents or cycle closing (as in circular DNA and RNA plasmids). Some knots may come in mirror-image enantiomer pairs. Such forms are called topological isomers or topoisomers. Also, two or more such molecules may be bound together in a catenane by such topological linkages, even if there is no chemical bond between them. If the molecules are large enough, the linking may occur in multiple topologically distinct ways, constituting different isomers. Cage compounds, such as helium enclosed in dodecahedrane (He@C20H20) and carbon peapods, are a similar type of topological isomerism involving molecules with large internal voids with restricted or no openings.

Isotopes and spin

Isotopomers
Different isotopes of the same element can be considered as different kinds of atoms when enumerating isomers of a molecule or ion. The replacement of one or more atoms by their isotopes can create multiple structural isomers and/or stereoisomers from a single isomer. For example, replacing two atoms of common hydrogen (1H) by deuterium (2H, or D) on an ethane molecule yields two distinct structural isomers, depending on whether the substitutions are both on the same carbon (1,1-dideuteroethane, HD2C-CH3) or one on each carbon (1,2-dideuteroethane, DH2C-CDH2), just as if the substituent were chlorine instead of deuterium. The two molecules do not interconvert easily and have different properties, such as their microwave spectra. Another example would be substituting one atom of deuterium for one of the hydrogens in chlorofluoromethane (CH2ClF). While the original molecule is not chiral and has a single isomer, the substitution creates a pair of chiral enantiomers of CHDClF, which could be distinguished (at least in theory) by their optical activity. When two isomers would be identical if all isotopes of each element were replaced by a single isotope, they are described as isotopomers or isotopic isomers. In the above two examples, if all D were replaced by H, the two dideuteroethanes would both become ethane and the two deuterochlorofluoromethanes would both become CH2ClF. The concept of isotopomers is different from isotopologues or isotopic homologs, which differ in their isotopic composition. For example, C2H5D and C2H4D2 are isotopologues and not isotopomers, and are therefore not isomers of each other.
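Whether two species can be isomers at all is a matter of comparing molecular formulas, and the distinction between isotopologues and isomers can be made concrete the same way. The sketch below is a naive illustration: it only parses the simple condensed formulas used in this article, and it treats D as a symbol in its own right.

import re
from collections import Counter

# A small sketch: count the atoms in a condensed structural formula to test
# whether two species share a molecular formula (a prerequisite for being
# isomers).  The parser is deliberately naive and only handles the simple
# condensed formulas appearing in this article.

def molecular_formula(condensed):
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", condensed.replace("-", "")):
        counts[element] += int(number) if number else 1
    return counts

# The propanol and the ether share the formula C3H8O, so they are isomers:
print(molecular_formula("H3C-CH2-CH2OH") == molecular_formula("H3C-CH2-O-CH3"))  # True

# C2H5D and C2H4D2 differ in isotopic composition, so they are isotopologues,
# not isomers:
print(molecular_formula("C2H5D") == molecular_formula("C2H4D2"))  # False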
Spin isomers
Another type of isomerism based on nuclear properties is spin isomerism, where molecules differ only in the relative spin magnetic quantum numbers ms of the constituent atomic nuclei. This phenomenon is significant for molecular hydrogen, which can be partially separated into two long-lived states described as spin isomers or nuclear spin isomers: parahydrogen, with the spins of the two nuclei pointing in opposite directions, and orthohydrogen, where the spins point in the same direction.

Isomerization
Isomerization is the process by which one molecule is transformed into another molecule that has exactly the same atoms, but with the atoms rearranged. In some molecules and under some conditions, isomerization occurs spontaneously. Many isomers are equal or roughly equal in bond energy, and so exist in roughly equal amounts, provided that they can interconvert relatively freely, that is, the energy barrier between the two isomers is not too high. When the isomerization occurs intramolecularly, it is considered a rearrangement reaction. An example of an organometallic isomerization is the production of decaphenylferrocene, [(η5-C5Ph5)2Fe], from its linkage isomer.

Synthesis of fumaric acid
Industrial synthesis of fumaric acid proceeds via the cis-trans isomerization of maleic acid. Topoisomerases are enzymes that can cut and reform circular DNA and thus change its topology.

Medicinal chemistry
Isomers having distinct biological properties are common; often the difference is as small as the placement of methyl groups. In substituted xanthines, theobromine, found in chocolate, is a vasodilator with some effects in common with caffeine; but, if one of the two methyl groups is moved to a different position on the two-ring core, the isomer is theophylline, which has a variety of effects, including bronchodilation and anti-inflammatory action. Another example of this occurs in the phenethylamine-based stimulant drugs. Phentermine is a non-chiral compound with a weaker effect than that of amphetamine. It is used as an appetite-reducing medication and has mild or no stimulant properties. However, an alternate atomic arrangement gives dextromethamphetamine, which is a stronger stimulant than amphetamine. In medicinal chemistry and biochemistry, enantiomers are a special concern because they may possess distinct biological activity. Many preparative procedures afford a mixture of equal amounts of both enantiomeric forms. In some cases, the enantiomers are separated by chromatography using chiral stationary phases. They may also be separated through the formation of diastereomeric salts. In other cases, enantioselective syntheses have been developed. As an inorganic example, cisplatin is an important drug used in cancer chemotherapy, whereas the trans isomer (transplatin) has no useful pharmacological activity.

History
Isomerism was first observed in 1827, when Friedrich Wöhler prepared silver cyanate and discovered that, although its elemental composition of AgCNO was identical to silver fulminate (prepared by Justus von Liebig the previous year), its properties were distinct. This finding challenged the prevailing chemical understanding of the time, which held that chemical compounds could be distinct only when their elemental compositions differ. (We now know that the bonding structures of fulminate and cyanate can be approximately described as ⁻O–N⁺≡C⁻ and O=C=N⁻, respectively.)
Additional examples were found in succeeding years, such as Wöhler's 1828 discovery that urea has the same atomic composition (CH4N2O) as the chemically distinct ammonium cyanate. (Their structures are now known to be (H2N-)2C=O and [NH4]⁺[O=C=N]⁻, respectively.) In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1848, Louis Pasteur observed that tartaric acid crystals came in two kinds of shapes that were mirror images of each other. Separating the crystals by hand, he obtained two versions of tartaric acid, each of which would crystallize in only one of the two shapes, and rotated the plane of polarized light to the same degree but in opposite directions. In 1860, Pasteur explicitly hypothesized that the molecules of isomers might have the same composition but different arrangements of their atoms.

See also
Allotropy (of elements)
Chirality (chemistry)
Cis-trans isomerism
Cyclohexane conformation
Descriptor (chemistry)
Electromerism
Isomery (botany)
Ligand isomerism
Nuclear isomer
Stereocenter
Structural isomerism
Tautomer
Vitamer
Semantics
Semantics is the study of linguistic meaning. It examines what meaning is, how words get their meaning, and how the meaning of a complex expression depends on its parts. Part of this process involves the distinction between sense and reference. Sense is given by the ideas and concepts associated with an expression while reference is the object to which an expression points. Semantics contrasts with syntax, which studies the rules that dictate how to create grammatically correct sentences, and pragmatics, which investigates how people use language in communication. Lexical semantics is the branch of semantics that studies word meaning. It examines whether words have one or several meanings and in what lexical relations they stand to one another. Phrasal semantics studies the meaning of sentences by exploring the phenomenon of compositionality or how new meanings can be created by arranging words. Formal semantics relies on logic and mathematics to provide precise frameworks of the relation between language and meaning. Cognitive semantics examines meaning from a psychological perspective and assumes a close relation between language ability and the conceptual structures used to understand the world. Other branches of semantics include conceptual semantics, computational semantics, and cultural semantics. Theories of meaning are general explanations of the nature of meaning and how expressions are endowed with it. According to referential theories, the meaning of an expression is the part of reality to which it points. Ideational theories identify meaning with mental states like the ideas that an expression evokes in the minds of language users. According to causal theories, meaning is determined by causes and effects, which behaviorist semantics analyzes in terms of stimulus and response. Further theories of meaning include truth-conditional semantics, verificationist theories, the use theory, and inferentialist semantics. The study of semantic phenomena began during antiquity but was not recognized as an independent field of inquiry until the 19th century. Semantics is relevant to the fields of formal logic, computer science, and psychology.

Definition and related fields
Semantics is the study of meaning in languages. It is a systematic inquiry that examines what linguistic meaning is and how it arises. It investigates how expressions are built up from different layers of constituents, like morphemes, words, clauses, sentences, and texts, and how the meanings of the constituents affect one another. Semantics can focus on a specific language, like English, but in its widest sense, it investigates meaning structures relevant to all languages. As a descriptive discipline, it aims to determine how meaning works without prescribing what meaning people should associate with particular expressions. Some of its key questions are "How do the meanings of words combine to create the meanings of sentences?", "How do meanings relate to the minds of language users, and to the things words refer to?", and "What is the connection between what a word means, and the contexts in which it is used?". The main disciplines engaged in semantics are linguistics, semiotics, and philosophy. Besides its meaning as a field of inquiry, semantics can also refer to theories within this field, like truth-conditional semantics, and to the meaning of particular expressions, like the semantics of the word fairy. As a field of inquiry, semantics has both an internal and an external side.
The internal side is interested in the connection between words and the mental phenomena they evoke, like ideas and conceptual representations. The external side examines how words refer to objects in the world and under what conditions a sentence is true. Many related disciplines investigate language and meaning. Semantics contrasts with other subfields of linguistics focused on distinct aspects of language. Phonology studies the different types of sounds used in languages and how sounds are connected to form words while syntax examines the rules that dictate how to arrange words to create sentences. These divisions are reflected in the fact that it is possible to master some aspects of a language while lacking others, like when a person knows how to pronounce a word without knowing its meaning. As a subfield of semiotics, semantics has a more narrow focus on meaning in language while semiotics studies both linguistic and non-linguistic signs. Semiotics investigates additional topics like the meaning of non-verbal communication, conventional symbols, and natural signs independent of human interaction. Examples include nodding to signal agreement, stripes on a uniform signifying rank, and the presence of vultures indicating a nearby animal carcass. Semantics further contrasts with pragmatics, which is interested in how people use language in communication. An expression like "That's what I'm talking about" can mean many things depending on who says it and in what situation. Semantics is interested in the possible meanings of expressions: what they can and cannot mean in general. In this regard, it is sometimes defined as the study of context-independent meaning. Pragmatics examines which of these possible meanings is relevant in a particular case. In contrast to semantics, it is interested in actual performance rather than in the general linguistic competence underlying this performance. This includes the topic of additional meaning that can be inferred even though it is not literally expressed, like what it means if a speaker remains silent on a certain topic. A closely related distinction by the semiotician Charles W. Morris holds that semantics studies the relation between words and the world, pragmatics examines the relation between words and users, and syntax focuses on the relation between different words. Semantics is related to etymology, which studies how words and their meanings changed in the course of history. Another connected field is hermeneutics, which is the art or science of interpretation and is concerned with the right methodology of interpreting text in general and scripture in particular. Metasemantics examines the metaphysical foundations of meaning and aims to explain where it comes from or how it arises. The word semantics originated from the Ancient Greek adjective σημαντικός (sēmantikós), meaning 'relating to signs', which is a derivative of σημεῖον (sēmeîon), the noun for 'sign'. It was initially used for medical symptoms and only later acquired its wider meaning regarding any type of sign, including linguistic signs. The word semantics entered the English language from the French term sémantique, which the linguist Michel Bréal first introduced at the end of the 19th century.

Basic concepts

Meaning
Semantics studies meaning in language, which is limited to the meaning of linguistic expressions. It concerns how signs are interpreted and what information they contain.
An example is the meaning of words provided in dictionary definitions by giving synonymous expressions or paraphrases, like defining the meaning of the term ram as adult male sheep. There are many forms of non-linguistic meaning that are not examined by semantics. Actions and policies can have meaning in relation to the goal they serve. Fields like religion and spirituality are interested in the meaning of life, which is about finding a purpose in life or the significance of existence in general. Linguistic meaning can be analyzed on different levels. Word meaning is studied by lexical semantics and investigates the denotation of individual words. It is often related to concepts of entities, like how the word dog is associated with the concept of the four-legged domestic animal. Sentence meaning falls into the field of phrasal semantics and concerns the denotation of full sentences. It usually expresses a concept applying to a type of situation, as in the sentence "the dog has ruined my blue skirt". The meaning of a sentence is often referred to as a proposition. Different sentences can express the same proposition, like the English sentence "the tree is green" and the German sentence "der Baum ist grün". Utterance meaning is studied by pragmatics and is about the meaning of an expression on a particular occasion. Sentence meaning and utterance meaning come apart in cases where expressions are used in a non-literal way, as is often the case with irony. Semantics is primarily interested in the public meaning that expressions have, like the meaning found in general dictionary definitions. Speaker meaning, by contrast, is the private or subjective meaning that individuals associate with expressions. It can diverge from the literal meaning, like when a person associates the word needle with pain or drugs.

Sense and reference
Meaning is often analyzed in terms of sense and reference, also referred to as intension and extension or connotation and denotation. The referent of an expression is the object to which the expression points. The sense of an expression is the way in which it refers to that object or how the object is interpreted. For example, the expressions morning star and evening star refer to the same planet, just like the expressions 2 + 2 and 3 + 1 refer to the same number. The meanings of these expressions differ not on the level of reference but on the level of sense. Sense is sometimes understood as a mental phenomenon that helps people identify the objects to which an expression refers. Some semanticists focus primarily on sense or primarily on reference in their analysis of meaning. To grasp the full meaning of an expression, it is usually necessary to understand both to what entities in the world it refers and how it describes them. The distinction between sense and reference can explain identity statements, which can be used to show how two expressions with a different sense have the same referent. For instance, the sentence "the morning star is the evening star" is informative and people can learn something from it. The sentence "the morning star is the morning star", by contrast, is an uninformative tautology since the expressions are identical not only on the level of reference but also on the level of sense.

Compositionality
Compositionality is a key aspect of how languages construct meaning. It is the idea that the meaning of a complex expression is a function of the meanings of its parts.
It is possible to understand the meaning of the sentence "Zuzana owns a dog" by understanding what the words Zuzana, owns, a and dog mean and how they are combined. In this regard, the meaning of complex expressions like sentences is different from word meaning since it is normally not possible to deduce what a word means by looking at its letters and one needs to consult a dictionary instead. Compositionality is often used to explain how people can formulate and understand an almost infinite number of meanings even though the amount of words and cognitive resources is finite. Many sentences that people read are sentences that they have never seen before and they are nonetheless able to understand them. When interpreted in a strong sense, the principle of compositionality states that the meaning of a complex expression is not just affected by its parts and how they are combined but fully determined this way. It is controversial whether this claim is correct or whether additional aspects influence meaning. For example, context may affect the meaning of expressions; idioms like "kick the bucket" carry figurative or non-literal meanings that are not directly reducible to the meanings of their parts.

Truth and truth conditions
Truth is a property of statements that accurately present the world and true statements are in accord with reality. Whether a statement is true usually depends on the relation between the statement and the rest of the world. The truth conditions of a statement are the way the world needs to be for the statement to be true. For example, it belongs to the truth conditions of the sentence "it is raining outside" that raindrops are falling from the sky. The sentence is true if it is used in a situation in which the truth conditions are fulfilled, i.e., if there is actually rain outside. Truth conditions play a central role in semantics and some theories rely exclusively on truth conditions to analyze meaning. To understand a statement usually implies that one has an idea about the conditions under which it would be true. This can happen even if one does not know whether the conditions are fulfilled.

Semiotic triangle
The semiotic triangle, also called the triangle of meaning, is a model used to explain the relation between language, language users, and the world, represented in the model as Symbol, Thought or Reference, and Referent. The symbol is a linguistic signifier, either in its spoken or written form. The central idea of the model is that there is no direct relation between a linguistic expression and what it refers to, as was assumed by earlier dyadic models. This is expressed in the diagram by the dotted line between symbol and referent. The model holds instead that the relation between the two is mediated through a third component. For example, the term apple stands for a type of fruit but there is no direct connection between this string of letters and the corresponding physical object. The relation is only established indirectly through the mind of the language user. When they see the symbol, it evokes a mental image or a concept, which establishes the connection to the physical object. This process is only possible if the language user learned the meaning of the symbol before. The meaning of a specific symbol is governed by the conventions of a particular language. The same symbol may refer to one object in one language, to another object in a different language, and to no object in another language.
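The compositional and truth-conditional ideas above can be made concrete with a toy model. The Python sketch below is purely illustrative (the miniature "world", the names, and the simplistic reading of the article a are assumptions, not a real semantic lexicon): word meanings are treated as extensions, and the truth value of "Zuzana owns a dog" is computed from them.

# A toy sketch tying together the compositional and truth-conditional ideas
# above.  The miniature model and the word meanings below are illustrative
# assumptions: a name denotes an individual, a noun denotes a set, a
# transitive verb denotes a set of pairs, and the truth value of the
# sentence is computed from these parts.

dog = {"rex"}                      # extension of the noun "dog"
owns = {("zuzana", "rex")}         # extension of the verb "owns"
zuzana = "zuzana"                  # referent of the name "Zuzana"

def owns_a(subject, verb, noun):
    """Meaning of '<subject> <verb> a <noun>': true if the subject stands in
    the verb relation to at least one member of the noun's extension."""
    return any((subject, x) in verb for x in noun)

# Truth conditions of "Zuzana owns a dog", evaluated in this tiny model:
print(owns_a(zuzana, owns, dog))   # True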
Others
Many other concepts are used to describe semantic phenomena. The semantic role of an expression is the function it fulfills in a sentence. In the sentence "the boy kicked the ball", the boy has the role of the agent who performs an action. The ball is the theme or patient of this action as something that does not act itself but is involved in or affected by the action. The same entity can be both agent and patient, like when someone cuts themselves. An entity has the semantic role of an instrument if it is used to perform the action; for instance, when cutting something with a knife, the knife is the instrument. For some sentences, no action is described but an experience takes place, like when a girl sees a bird. In this case, the girl has the role of the experiencer. Other common semantic roles are location, source, goal, beneficiary, and stimulus. Lexical relations describe how words stand to one another. Two words are synonyms if they share the same or a very similar meaning, like car and automobile or buy and purchase. Antonyms have opposite meanings, such as the contrast between alive and dead or fast and slow. One term is a hyponym of another term if the meaning of the first term is included in the meaning of the second term. For example, ant is a hyponym of insect. A prototype is a hyponym that has characteristic features of the type it belongs to. A robin is a prototype of a bird but a penguin is not. Two words with the same pronunciation are homophones, like flour and flower, while two words with the same spelling are homonyms, like a bank of a river in contrast to a bank as a financial institution. Hyponymy is closely related to meronymy, which describes the relation between part and whole. For instance, wheel is a meronym of car. An expression is ambiguous if it has more than one possible meaning. In some cases, it is possible to disambiguate them to discern the intended meaning. The term polysemy is used if the different meanings are closely related to one another, like the meanings of the word head, which can refer to the topmost part of the human body or the top-ranking person in an organization. The meaning of words can often be subdivided into meaning components called semantic features. The word horse has the semantic feature animate but lacks the semantic feature human. It may not always be possible to fully reconstruct the meaning of a word by identifying all its semantic features. A semantic or lexical field is a group of words that are all related to the same activity or subject. For instance, the semantic field of cooking includes words like bake, boil, spice, and pan. The context of an expression refers to the situation or circumstances in which it is used and includes time, location, speaker, and audience. It also encompasses other passages in a text that come before and after it. Context affects the meaning of various expressions, like the deictic expression here and the anaphoric expression she. A syntactic environment is extensional or transparent if it is always possible to exchange expressions with the same reference without affecting the truth value of the sentence. For example, the environment of the sentence "the number 8 is even" is extensional because replacing the expression the number 8 with the number of planets in the solar system does not change its truth value. For intensional or opaque contexts, this type of substitution is not always possible.
For instance, the embedded clause in "Paco believes that the number 8 is even" is intensional since Paco may not know that the number of planets in the solar system is 8. Semanticists commonly distinguish the language they study, called object language, from the language they use to express their findings, called metalanguage. When a professor uses Japanese to teach their student how to interpret the language of first-order logic, then the language of first-order logic is the object language and Japanese is the metalanguage. The same language may occupy the role of object language and metalanguage at the same time. This is the case in monolingual English dictionaries, in which both the entry term belonging to the object language and the definition text belonging to the metalanguage are taken from the English language.

Branches

Lexical semantics
Lexical semantics is the sub-field of semantics that studies word meaning. It examines semantic aspects of individual words and the vocabulary as a whole. This includes the study of lexical relations between words, such as whether two terms are synonyms or antonyms. Lexical semantics categorizes words based on semantic features they share and groups them into semantic fields unified by a common subject. This information is used to create taxonomies to organize lexical knowledge, for example, by distinguishing between physical and abstract entities and subdividing physical entities into stuff and individuated entities. Further topics of interest are polysemy, ambiguity, and vagueness. Lexical semantics is sometimes divided into two complementary approaches: semasiology and onomasiology. Semasiology starts from words and examines what their meaning is. It is interested in whether words have one or several meanings and how those meanings are related to one another. Instead of going from word to meaning, onomasiology goes from meaning to word. It starts with a concept and examines what names this concept has or how it can be expressed in a particular language. Some semanticists also include the study of lexical units other than words in the field of lexical semantics. Compound expressions like being under the weather have a non-literal meaning that acts as a unit and is not a direct function of its parts. Another topic concerns the meaning of morphemes that make up words, for instance, how negative prefixes like in- and dis- affect the meaning of the words they are part of, as in inanimate and dishonest.

Phrasal semantics
Phrasal semantics studies the meaning of sentences. It relies on the principle of compositionality to explore how the meaning of complex expressions arises from the combination of their parts. The different parts can be analyzed as subject, predicate, or argument. The subject of a sentence usually refers to a specific entity while the predicate describes a feature of the subject or an event in which the subject participates. Arguments provide additional information to complete the predicate. For example, in the sentence "Mary hit the ball", Mary is the subject, hit is the predicate, and the ball is an argument. A more fine-grained categorization distinguishes between different semantic roles of words, such as agent, patient, theme, location, source, and goal. Verbs usually function as predicates and often help to establish connections between different expressions to form a more complex meaning structure. In the expression "Beethoven likes Schubert", the verb like connects a liker to the object of their liking.
Other sentence parts modify meaning rather than form new connections. For instance, the adjective red modifies the color of another entity in the expression red car. A further compositional device is variable binding, which is used to determine the reference of a term. For example, the last part of the expression "the woman who likes Beethoven" specifies which woman is meant. Parse trees can be used to show the underlying hierarchy employed to combine the different parts. Various grammatical devices, like the gerund form, also contribute to meaning and are studied by grammatical semantics.

Formal semantics
Formal semantics uses formal tools from logic and mathematics to analyze meaning in natural languages. It aims to develop precise logical formalisms to clarify the relation between expressions and their denotation. One of its key tasks is to provide frameworks of how language represents the world, for example, using ontological models to show how linguistic expressions map to the entities of that model. A common idea is that words refer to individual objects or groups of objects while sentences relate to events and states. Sentences are mapped to a truth value based on whether their description of the world is in correspondence with its ontological model. Formal semantics further examines how to use formal mechanisms to represent linguistic phenomena such as quantification, intensionality, noun phrases, plurals, mass terms, tense, and modality. Montague semantics is an early and influential theory in formal semantics that provides a detailed analysis of how the English language can be represented using mathematical logic. It relies on higher-order logic, lambda calculus, and type theory to show how meaning is created through the combination of expressions belonging to different syntactic categories. Dynamic semantics is a subfield of formal semantics that focuses on how information grows over time. According to it, "meaning is context change potential": the meaning of a sentence is not given by the information it contains but by the information change it brings about relative to a context.

Cognitive semantics
Cognitive semantics studies the problem of meaning from a psychological perspective or how the mind of the language user affects meaning. As a subdiscipline of cognitive linguistics, it sees language as a wide cognitive ability that is closely related to the conceptual structures used to understand and represent the world. Cognitive semanticists do not draw a sharp distinction between linguistic knowledge and knowledge of the world and see them instead as interrelated phenomena. They study how the interaction between language and human cognition affects the conceptual organization in very general domains like space, time, causation, and action. The contrast between profile and base is sometimes used to articulate the underlying knowledge structure. The profile of a linguistic expression is the aspect of the knowledge structure that it brings to the foreground while the base is the background that provides the context of this aspect without being at the center of attention. For example, the profile of the word hypotenuse is a straight line while the base is a right-angled triangle of which the hypotenuse forms a part. Cognitive semantics further compares the conceptual patterns and linguistic typologies across languages and considers to what extent the cognitive conceptual structures of humans are universal or relative to their linguistic background.
Another research topic concerns the psychological processes involved in the application of grammar. Other investigated phenomena include categorization, which is understood as a cognitive heuristic to avoid information overload by regarding different entities in the same way, and embodiment, which concerns how the language user's bodily experience affects the meaning of expressions. Frame semantics is an important subfield of cognitive semantics. Its central idea is that the meaning of terms cannot be understood in isolation from each other but needs to be analyzed on the background of the conceptual structures they depend on. These structures are made explicit in terms of semantic frames. For example, words like bride, groom, and honeymoon evoke in the mind the frame of marriage.

Others
Conceptual semantics shares with cognitive semantics the idea of studying linguistic meaning from a psychological perspective by examining how humans conceptualize and experience the world. It holds that meaning is not about the objects to which expressions refer but about the cognitive structure of human concepts that connect thought, perception, and action. Conceptual semantics differs from cognitive semantics by introducing a strict distinction between meaning and syntax and by relying on various formal devices to explore the relation between meaning and cognition. Computational semantics examines how the meaning of natural language expressions can be represented and processed on computers. It often relies on the insights of formal semantics and applies them to problems that can be computationally solved. Some of its key problems include computing the meaning of complex expressions by analyzing their parts, handling ambiguity, vagueness, and context-dependence, and using the extracted information in automatic reasoning. It forms part of computational linguistics, artificial intelligence, and cognitive science. Its applications include machine learning and machine translation. Cultural semantics studies the relation between linguistic meaning and culture. It compares conceptual structures in different languages and is interested in how meanings evolve and change because of cultural phenomena associated with politics, religion, and customs. For example, address practices encode cultural values and social hierarchies, as in the difference of politeness between expressions like tú and usted in Spanish or du and Sie in German, in contrast to English, which lacks these distinctions and uses the pronoun you in either case. Closely related fields are intercultural semantics, cross-cultural semantics, and comparative semantics. Pragmatic semantics studies how the meaning of an expression is shaped by the situation in which it is used. It is based on the idea that communicative meaning is usually context-sensitive and depends on who participates in the exchange, what information they share, and what their intentions and background assumptions are. It focuses on communicative actions, of which linguistic expressions only form one part. Some theorists include these topics within the scope of semantics while others consider them part of the distinct discipline of pragmatics.

Theories of meaning
Theories of meaning explain what meaning is, what meaning an expression has, and how the relation between expression and meaning is established.

Referential
Referential theories state that the meaning of an expression is the entity to which it points. The meaning of singular terms like names is the individual to which they refer.
For example, the meaning of the name George Washington is the person with this name. General terms refer not to a single entity but to the set of objects to which this term applies. In this regard, the meaning of the term cat is the set of all cats. Similarly, verbs usually refer to classes of actions or events and adjectives refer to properties of individuals and events. Simple referential theories face problems for meaningful expressions that have no clear referent. Names like Pegasus and Santa Claus have meaning even though they do not point to existing entities. Other difficulties concern cases in which different expressions are about the same entity. For instance, the expressions Roger Bannister and the first man to run a four-minute mile refer to the same person but do not mean exactly the same thing. This is particularly relevant when talking about beliefs since a person may understand both expressions without knowing that they point to the same entity. A further problem is given by expressions whose meaning depends on the context, like the deictic terms here and I. To avoid these problems, referential theories often introduce additional devices. Some identify meaning not directly with objects but with functions that point to objects. This additional level has the advantage of taking the context of an expression into account since the same expression may point to one object in one context and to another object in a different context. For example, the reference of the word here depends on the location in which it is used. A closely related approach is possible world semantics, which allows expressions to refer not only to entities in the actual world but also to entities in other possible worlds. According to this view, expressions like the first man to run a four-minute mile refer to different persons in different worlds. This view can also be used to analyze sentences that talk about what is possible or what is necessary: possibility is what is true in some possible worlds while necessity is what is true in all possible worlds. Ideational Ideational theories, also called mentalist theories, are not primarily interested in the reference of expressions and instead explain meaning in terms of the mental states of language users. One historically influential approach articulated by John Locke holds that expressions stand for ideas in the speaker's mind. According to this view, the meaning of the word dog is the idea that people have of dogs. Language is seen as a medium used to transfer ideas from the speaker to the audience. After having learned the same meaning of signs, the speaker can produce a sign that corresponds to the idea in their mind and the perception of this sign evokes the same idea in the mind of the audience. A closely related theory focuses not directly on ideas but on intentions. This view is particularly associated with Paul Grice, who observed that people usually communicate to cause some reaction in their audience. He held that the meaning of an expression is given by the intended reaction. This means that communication is not just about decoding what the speaker literally said but requires an understanding of their intention or why they said it. For example, telling someone looking for petrol that "there is a garage around the corner" has the meaning that petrol can be obtained there because of the speaker's intention to help. This goes beyond the literal meaning, which has no explicit connection to petrol. 
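The possible-worlds treatment of possibility and necessity described above lends itself to a brief sketch. In the following Python fragment the worlds and the propositions assigned to them are invented for illustration; the only point is that possibility quantifies existentially, and necessity universally, over worlds.

```python
# A toy assignment of truth values to propositions in several possible worlds.
worlds = {
    "actual": {"it_is_raining": False, "grass_is_a_plant": True},
    "w1":     {"it_is_raining": True,  "grass_is_a_plant": True},
    "w2":     {"it_is_raining": False, "grass_is_a_plant": True},
}

def possible(proposition: str) -> bool:
    """True if the proposition holds in at least one possible world."""
    return any(facts[proposition] for facts in worlds.values())

def necessary(proposition: str) -> bool:
    """True if the proposition holds in every possible world."""
    return all(facts[proposition] for facts in worlds.values())

print(possible("it_is_raining"))    # True: it rains in w1
print(necessary("it_is_raining"))   # False: it does not rain in the actual world
print(necessary("grass_is_a_plant"))  # True in this toy model
```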
Causal Causal theories hold that the meaning of an expression depends on the causes and effects it has. According to behaviorist semantics, also referred to as stimulus-response theory, the meaning of an expression is given by the situation that prompts the speaker to use it and the response it provokes in the audience. For instance, the meaning of yelling "Fire!" is given by the presence of an uncontrolled fire and attempts to control it or seek safety. Behaviorist semantics relies on the idea that learning a language consists in adopting behavioral patterns in the form of stimulus-response pairs. One of its key motivations is to avoid private mental entities and define meaning instead in terms of publicly observable language behavior. Another causal theory focuses on the meaning of names and holds that a naming event is required to establish the link between name and named entity. This naming event acts as a form of baptism that establishes the first link of a causal chain in which all subsequent uses of the name participate. According to this view, the name Plato refers to an ancient Greek philosopher because, at some point, he was originally named this way and people kept using this name to refer to him. This view was originally formulated by Saul Kripke to apply to names only but has been extended to cover other types of speech as well. Others Truth-conditional semantics analyzes the meaning of sentences in terms of their truth conditions. According to this view, to understand a sentence means to know what the world needs to be like for the sentence to be true. Truth conditions can themselves be expressed through possible worlds. For example, the sentence "Hillary Clinton won the 2016 American presidential election" is false in the actual world but there are some possible worlds in which it is true. The extension of a sentence can be interpreted as its truth value while its intension is the set of all possible worlds in which it is true. Truth-conditional semantics is closely related to verificationist theories, which introduce the additional idea that there should be some kind of verification procedure to assess whether a sentence is true. They state that the meaning of a sentence consists in the method to verify it or in the circumstances that justify it. For instance, scientific claims often make predictions, which can be used to confirm or disconfirm them using observation. According to verificationism, sentences that can neither be verified nor falsified are meaningless. The use theory states that the meaning of an expression is given by the way it is utilized. This view was first introduced by Ludwig Wittgenstein, who understood language as a collection of language games. The meaning of expressions depends on how they are used inside a game and the same expression may have different meanings in different games. Some versions of this theory identify meaning directly with patterns of regular use. Others focus on social norms and conventions by additionally taking into account whether a certain use is considered appropriate in a given society. Inferentialist semantics, also called conceptual role semantics, holds that the meaning of an expression is given by the role it plays in the premises and conclusions of good inferences. For example, one can infer from "x is a male sibling" that "x is a brother" and one can infer from "x is a brother" that "x has parents". 
According to inferentialist semantics, the meaning of the word brother is determined by these and all similar inferences that can be drawn. History Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic. In ancient Greece, Plato (427–347 BCE) explored the relation between names and things in his dialogue Cratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users. The book On Interpretation by Aristotle (384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts. The Stoics incorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions. In ancient India, the orthodox school of Nyaya held that all names refer to real objects. It explored how words lead to an understanding of the thing meant and what consequence this relation has to the creation of knowledge. Philosophers of the orthodox school of Mīmāṃsā discussed the relation between the meanings of individual words and full sentences while considering which one is more basic. The book Vākyapadīya by Bhartṛhari (4th–5th century CE) distinguished between different types of words and considered how they can carry different meanings depending on how they are used. In ancient China, the Mohists argued that names play a key role in making distinctions to guide moral behavior. They inspired the School of Names, which explored the relation between names and entities while examining how names are required to identify and judge entities. In the Middle Ages, Augustine of Hippo (354–430) developed a general conception of signs as entities that stand for other entities and convey them to the intellect. He was the first to introduce the distinction between natural and linguistic signs as different types belonging to a common genus. Boethius (480–528) wrote a translation of and various comments on Aristotle's book On Interpretation, which popularized its main ideas and inspired reflections on semantic phenomena in the scholastic tradition. An innovation in the semantics of Peter Abelard (1079–1142) was his interest in propositions or the meaning of sentences in contrast to the focus on the meaning of individual words by many of his predecessors. He further explored the nature of universals, which he understood as mere semantic phenomena of common names caused by mental abstractions that do not refer to any entities. In the Arabic tradition, Ibn Faris (920–1004) identified meaning with the intention of the speaker while Abu Mansur al-Azhari (895–980) held that meaning resides directly in speech and needs to be extracted through interpretation. 
An important topic towards the end of the Middle Ages was the distinction between categorematic and syncategorematic terms. Categorematic terms have an independent meaning and refer to some part of reality, like horse and Socrates. Syncategorematic terms lack independent meaning and fulfill other semantic functions, such as modifying or quantifying the meaning of other expressions, like the words some, not, and necessarily. An early version of the causal theory of meaning was proposed by Roger Bacon (c. 1219/20 – c. 1292), who held that things get names similar to how people get names through some kind of initial baptism. His ideas inspired the tradition of the speculative grammarians, who proposed that there are certain universal structures found in all languages. They arrived at this conclusion by drawing an analogy between the modes of signification on the level of language, the modes of understanding on the level of mind, and the modes of being on the level of reality. In the early modern period, Thomas Hobbes (1588–1679) distinguished between marks, which people use privately to recall their own thoughts, and signs, which are used publicly to communicate their ideas to others. In their Port-Royal Logic, Antoine Arnauld (1612–1694) and Pierre Nicole (1625–1695) developed an early precursor of the distinction between intension and extension. The Essay Concerning Human Understanding by John Locke (1632–1704) presented an influential version of the ideational theory of meaning, according to which words stand for ideas and help people communicate by transferring ideas from one mind to another. Gottfried Wilhelm Leibniz (1646–1716) understood language as the mirror of thought and tried to conceive the outlines of a universal formal language to express scientific and philosophical truths. This attempt inspired theorists Christian Wolff (1679–1754), Georg Bernhard Bilfinger (1693–1750), and Johann Heinrich Lambert (1728–1777) to develop the idea of a general science of sign systems. Étienne Bonnot de Condillac (1715–1780) accepted and further developed Leibniz's idea of the linguistic nature of thought. Against Locke, he held that language is involved in the creation of ideas and is not merely a medium to communicate them. In the 19th century, semantics emerged and solidified as an independent field of inquiry. Christian Karl Reisig (1792–1829) is sometimes credited as the father of semantics since he clarified its concept and scope while also making various contributions to its key ideas. Michel Bréal (1832–1915) followed him in providing a broad conception of the field, for which he coined the French term sémantique. John Stuart Mill (1806–1873) gave great importance to the role of names to refer to things. He distinguished between the connotation and denotation of names and held that propositions are formed by combining names. Charles Sanders Peirce (1839–1914) conceived semiotics as a general theory of signs with several subdisciplines, which were later identified by Charles W. Morris (1901–1979) as syntactics, semantics, and pragmatics. In his pragmatist approach to semantics, Peirce held that the meaning of conceptions consists in the entirety of their practical consequences. The philosophy of Gottlob Frege (1848–1925) contributed to semantics on many different levels. Frege first introduced the distinction between sense and reference, and his development of predicate logic and the principle of compositionality formed the foundation of many subsequent developments in formal semantics.
Edmund Husserl (1859–1938) explored meaning from a phenomenological perspective by considering the mental acts that endow expressions with meaning. He held that meaning always implies reference to an object and that expressions lacking a referent, like "green is or", are meaningless. In the 20th century, Alfred Tarski (1901–1983) defined truth in formal languages through his semantic theory of truth, which was influential in the development of truth-conditional semantics by Donald Davidson (1917–2003). Tarski's student Richard Montague (1930–1971) formulated a complex formal framework of the semantics of the English language, which was responsible for establishing formal semantics as a major area of research. According to structural semantics, which was inspired by the structuralist philosophy of Ferdinand de Saussure (1857–1913), language is a complex network of structural relations and the meanings of words are not fixed individually but depend on their position within this network. The theory of general semantics was developed by Alfred Korzybski (1879–1950) as an inquiry into how language represents reality and affects human thought. The contributions of George Lakoff (1941–present) and Ronald Langacker (1942–present) provided the foundation of cognitive semantics. Charles J. Fillmore (1929–2014) developed frame semantics as a major approach in this area. The closely related field of conceptual semantics was inaugurated by Ray Jackendoff (1945–present). In various disciplines Logic Logicians study correct reasoning and often develop formal languages to express arguments and assess their correctness. One part of this process is to provide a semantics for a formal language to precisely define what its terms mean. A semantics of a formal language is a set of rules, usually expressed as a mathematical function, that assigns meanings to formal language expressions. For example, the language of first-order logic uses lowercase letters for individual constants and uppercase letters for predicates. To express the sentence "Bertie is a dog", the formula D(b) can be used, where b is an individual constant for Bertie and D is a predicate for dog. Classical model-theoretic semantics assigns meaning to these terms by defining an interpretation function that maps individual constants to specific objects and predicates to sets of objects or tuples. The function maps b to Bertie and D to the set of all dogs. This way, it is possible to calculate the truth value of the sentence: it is true if Bertie is a member of the set of dogs and false otherwise. Formal logic aims to determine whether arguments are deductively valid, that is, whether the premises entail the conclusion. Entailment can be defined in terms of syntax or in terms of semantics. Syntactic entailment, expressed with the symbol ⊢, relies on rules of inference, which can be understood as procedures to transform premises and arrive at a conclusion. These procedures only take the logical form of the premises on the level of syntax into account and ignore what meaning they express. Semantic entailment, expressed with the symbol ⊨, looks at the meaning of the premises, in particular, at their truth value. A conclusion follows semantically from a set of premises if the truth of the premises ensures the truth of the conclusion, that is, if any semantic interpretation function that assigns the premises the value true also assigns the conclusion the value true.
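This model-theoretic picture can be sketched in a few lines of code. The particular model below, and the restriction to a finite collection of interpretations when checking entailment, are simplifying assumptions made for the example; genuine semantic entailment quantifies over all interpretations.

```python
# A minimal sketch of classical model-theoretic evaluation for the Bertie example.
# The model is an invented illustration.

model = {
    "constants":  {"b": "Bertie"},                          # b names Bertie
    "predicates": {"D": {"Bertie", "Fido"},                 # D -> the set of all dogs
                   "A": {"Bertie", "Fido", "Tweety"}},      # A -> the set of all animals
}

def evaluate(predicate: str, constant: str, interpretation: dict) -> bool:
    """An atomic sentence such as D(b) is true iff the object named by the
    constant is a member of the set assigned to the predicate."""
    obj = interpretation["constants"][constant]
    return obj in interpretation["predicates"][predicate]

def entails(premise, conclusion, interpretations) -> bool:
    """Semantic entailment over a finite collection of interpretations:
    every interpretation making the premise true also makes the conclusion true."""
    return all(
        evaluate(conclusion[0], conclusion[1], m)
        for m in interpretations
        if evaluate(premise[0], premise[1], m)
    )

print(evaluate("D", "b", model))                   # True: Bertie is in the set of dogs
print(entails(("D", "b"), ("A", "b"), [model]))    # True over this toy collection of models
```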
Computer science In computer science, the semantics of a program is how it behaves when a computer runs it. Semantics contrasts with syntax, which is the particular form in which instructions are expressed. The same behavior can usually be described with different forms of syntax. In JavaScript, this is the case for the commands i += 1 and i = i + 1, which are syntactically different expressions that increase the value of the variable i by one. This difference is also reflected in different programming languages, which rely on different syntax but can usually be employed to create programs with the same behavior on the semantic level. Static semantics focuses on semantic aspects that affect the compilation of a program. In particular, it is concerned with detecting errors in syntactically correct programs, such as type errors, which arise when an operation receives an incompatible data type. This is the case, for instance, if a function performing a numerical calculation is given a string instead of a number as an argument. Dynamic semantics focuses on the run-time behavior of programs, that is, what happens during the execution of instructions. The main approaches to dynamic semantics are denotational, axiomatic, and operational semantics. Denotational semantics relies on mathematical formalisms to describe the effects of each element of the code. Axiomatic semantics uses deductive logic to analyze which conditions must be in place before and after the execution of a program. Operational semantics interprets the execution of a program as a series of steps, each involving the transition from one state to another state. Psychology Psychological semantics examines psychological aspects of meaning. It is concerned with how meaning is represented on a cognitive level and what mental processes are involved in understanding and producing language. It further investigates how meaning interacts with other mental processes, such as the relation between language and perceptual experience. Other issues concern how people learn new words and relate them to familiar things and concepts, how they infer the meaning of compound expressions they have never heard before, how they resolve ambiguous expressions, and how semantic illusions lead them to misinterpret sentences. One key topic is semantic memory, which is a form of general knowledge of meaning that includes the knowledge of language, concepts, and facts. It contrasts with episodic memory, which records events that a person experienced in their life. The comprehension of language relies on semantic memory and the information it carries about word meanings. According to a common view, word meanings are stored and processed in relation to their semantic features. The feature comparison model states that sentences like "a robin is a bird" are assessed on a psychological level by comparing the semantic features of the word robin with the semantic features of the word bird. The assessment process is fast if their semantic features are similar, which is the case if the example is a prototype of the general category. For atypical examples, as in the sentence "a penguin is a bird", there is less overlap in the semantic features and the psychological process is significantly slower.
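The feature comparison model can be illustrated with a small sketch. The feature sets and the use of Jaccard overlap as a stand-in for the psychological comparison process are assumptions made for this example; the point is only why a robin is verified as a bird more quickly than a penguin.

```python
# A rough sketch of the feature comparison idea: category judgements are
# faster when the semantic features of the two words overlap strongly.
# The feature sets below are invented for illustration.

features = {
    "bird":    {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "robin":   {"has_feathers", "lays_eggs", "flies", "sings", "small"},
    "penguin": {"has_feathers", "lays_eggs", "swims", "large"},
}

def overlap(word_a: str, word_b: str) -> float:
    """Jaccard overlap of the two feature sets (1.0 = identical features)."""
    a, b = features[word_a], features[word_b]
    return len(a & b) / len(a | b)

print(overlap("robin", "bird"))    # 1.0   -> prototypical, fast verification
print(overlap("penguin", "bird"))  # ~0.29 -> atypical, slower verification
```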
Thermodynamic equilibrium
Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of matter or of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, not only is there an absence of macroscopic change, but there is an “absence of any tendency toward change on a macroscopic scale.” Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium, while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, until disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings. In systems that are in a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a "meta-stable equilibrium". Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equilibrium and this is accompanied by an increase in the sum of the entropies of the portions. Overview Classical thermodynamics deals with states of dynamic equilibrium. The state of a system at thermodynamic equilibrium is the one for which some thermodynamic potential is minimized (in the absence of an applied voltage), or for which the entropy (S) is maximized, for specified conditions. One such potential is the Helmholtz free energy (A), for a closed system at constant volume and temperature (controlled by a heat bath): A = U − TS. Another potential, the Gibbs free energy (G), is minimized at thermodynamic equilibrium in a closed system at constant temperature and pressure, both controlled by the surroundings: G = U + PV − TS, where T denotes the absolute thermodynamic temperature, P the pressure, S the entropy, V the volume, and U the internal energy of the system. In other words, dG = 0 is a necessary condition for chemical equilibrium under these conditions (in the absence of an applied voltage). Thermodynamic equilibrium is the unique stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the particular conditions in the specified surroundings.
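The idea that equilibrium corresponds to a minimum of the appropriate potential can be illustrated numerically. The sketch below uses a toy two-level system, with invented values for the level spacing and the number of sites, and scans the excited fraction x for the minimum of the Helmholtz free energy A(x) = U(x) − T S(x); for this model the minimum coincides with the Boltzmann occupation 1/(1 + e^(ε/kT)).

```python
import math

# Toy two-level system: N sites, each in the ground state (energy 0) or the
# excited state (energy eps); x is the excited fraction.
# U(x) = N*eps*x and S(x) = -N*k*[x ln x + (1-x) ln(1-x)] (mixing entropy).
# At fixed T and V, equilibrium minimizes A(x) = U(x) - T*S(x).

k = 1.380649e-23      # Boltzmann constant, J/K
N = 1.0e20            # number of two-level sites (illustrative)
eps = 4.0e-21         # level spacing in joules (illustrative)
T = 300.0             # temperature in kelvin

def helmholtz(x: float) -> float:
    """Helmholtz free energy A = U - T*S for excited fraction x."""
    U = N * eps * x
    S = -N * k * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
    return U - T * S

# Brute-force scan for the minimizing fraction.
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=helmholtz)

boltzmann = 1.0 / (1.0 + math.exp(eps / (k * T)))
print(f"free-energy minimum at x = {x_eq:.4f}")
print(f"Boltzmann prediction       {boltzmann:.4f}")   # agrees to the grid resolution
```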
Conditions For a completely isolated system, S is maximum at thermodynamic equilibrium. For a closed system at controlled constant temperature and volume, A is minimum at thermodynamic equilibrium. For a closed system at controlled constant temperature and pressure without an applied voltage, G is minimum at thermodynamic equilibrium. The various types of equilibria are achieved as follows: Two systems are in thermal equilibrium when their temperatures are the same. Two systems are in mechanical equilibrium when their pressures are the same. Two systems are in diffusive equilibrium when their chemical potentials are the same. All forces are balanced and there is no significant external driving force. Relation of exchange equilibrium between systems Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in mutual contact, with long-range forces also linking them. The enclosure of the system is the surface of contiguity or boundary between the two systems. In the thermodynamic formalism, that surface is regarded as having specific properties of permeability. For example, the surface of contiguity may be supposed to be permeable only to heat, allowing energy to transfer only as heat. Then the two systems are said to be in thermal equilibrium when the long-range forces are unchanging in time and the transfer of energy as heat between them has slowed and eventually stopped permanently; this is an example of a contact equilibrium. Other kinds of contact equilibrium are defined by other kinds of specific permeability. When two systems are in contact equilibrium with respect to a particular kind of permeability, they have common values of the intensive variable that belongs to that particular kind of permeability. Examples of such intensive variables are temperature, pressure, and chemical potential. A contact equilibrium may be regarded also as an exchange equilibrium. There is a zero balance of rate of transfer of some quantity between the two systems in contact equilibrium. For example, for a wall permeable only to heat, the rates of diffusion of internal energy as heat between the two systems are equal and opposite. An adiabatic wall between the two systems is 'permeable' only to energy transferred as work; at mechanical equilibrium the rates of transfer of energy as work between them are equal and opposite. If the wall is a simple wall, then the rates of transfer of volume across it are also equal and opposite; and the pressures on either side of it are equal. If the adiabatic wall is more complicated, with a sort of leverage, having an area-ratio, then the pressures of the two systems in exchange equilibrium are in the inverse ratio of the volume exchange ratio; this keeps the zero balance of rates of transfer as work. A radiative exchange can occur between two otherwise separate systems. Radiative exchange equilibrium prevails when the two systems have the same temperature. The thermodynamic state of internal equilibrium of a system A collection of matter may be entirely isolated from its surroundings. If it has been left undisturbed for an indefinitely long time, classical thermodynamics postulates that it is in a state in which no changes occur within it, and there are no flows within it. This is a thermodynamic state of internal equilibrium. (This postulate is sometimes, but not often, called the "minus first" law of thermodynamics.
One textbook calls it the "zeroth law", remarking that the authors think this more befitting that title than its more customary definition, which apparently was suggested by Fowler.) Such states are a principal concern in what is known as classical or equilibrium thermodynamics, for they are the only states of the system that are regarded as well defined in that subject. A system in contact equilibrium with another system can by a thermodynamic operation be isolated, and upon the event of isolation, no change occurs in it. A system in a relation of contact equilibrium with another system may thus also be regarded as being in its own state of internal thermodynamic equilibrium. Multiple contact equilibrium The thermodynamic formalism allows that a system may have contact with several other systems at once, which may or may not also have mutual contact, the contacts having respectively different permeabilities. If these systems are all jointly isolated from the rest of the world those of them that are in contact then reach respective contact equilibria with one another. If several systems are free of adiabatic walls between each other, but are jointly isolated from the rest of the world, then they reach a state of multiple contact equilibrium, and they have a common temperature, a total internal energy, and a total entropy. Amongst intensive variables, this is a unique property of temperature. It holds even in the presence of long-range forces. (That is, there is no "force" that can maintain temperature discrepancies.) For example, in a system in thermodynamic equilibrium in a vertical gravitational field, the pressure on the top wall is less than that on the bottom wall, but the temperature is the same everywhere. A thermodynamic operation may occur as an event restricted to the walls that are within the surroundings, directly affecting neither the walls of contact of the system of interest with its surroundings, nor its interior, and occurring within a definitely limited time. For example, an immovable adiabatic wall may be placed or removed within the surroundings. Consequent upon such an operation restricted to the surroundings, the system may be for a time driven away from its own initial internal state of thermodynamic equilibrium. Then, according to the second law of thermodynamics, the whole undergoes changes and eventually reaches a new and final equilibrium with the surroundings. Following Planck, this consequent train of events is called a natural thermodynamic process. It is allowed in equilibrium thermodynamics just because the initial and final states are of thermodynamic equilibrium, even though during the process there is transient departure from thermodynamic equilibrium, when neither the system nor its surroundings are in well defined states of internal equilibrium. A natural process proceeds at a finite rate for the main part of its course. It is thereby radically different from a fictive quasi-static 'process' that proceeds infinitely slowly throughout its course, and is fictively 'reversible'. Classical thermodynamics allows that even though a process may take a very long time to settle to thermodynamic equilibrium, if the main part of its course is at a finite rate, then it is considered to be natural, and to be subject to the second law of thermodynamics, and thereby irreversible. Engineered machines and artificial devices and manipulations are permitted within the surroundings. 
The allowance of such operations and devices in the surroundings but not in the system is the reason why Kelvin in one of his statements of the second law of thermodynamics spoke of "inanimate" agency; a system in thermodynamic equilibrium is inanimate. Otherwise, a thermodynamic operation may directly affect a wall of the system. It is often convenient to suppose that some of the surrounding subsystems are so much larger than the system that the process can affect the intensive variables only of the surrounding subsystems, and they are then called reservoirs for relevant intensive variables. Local and global equilibrium It can be useful to distinguish between global and local thermodynamic equilibrium. In thermodynamics, exchanges within a system and between the system and the outside are controlled by intensive parameters. As an example, temperature controls heat exchanges. Global thermodynamic equilibrium (GTE) means that those intensive parameters are homogeneous throughout the whole system, while local thermodynamic equilibrium (LTE) means that those intensive parameters are varying in space and time, but are varying so slowly that, for any point, one can assume thermodynamic equilibrium in some neighborhood about that point. If the description of the system requires variations in the intensive parameters that are too large, the very assumptions upon which the definitions of these intensive parameters are based will break down, and the system will be in neither global nor local equilibrium. For example, it takes a certain number of collisions for a particle to equilibrate to its surroundings. If the average distance it has moved during these collisions removes it from the neighborhood it is equilibrating to, it will never equilibrate, and there will be no LTE. Temperature is, by definition, proportional to the average internal energy of an equilibrated neighborhood. Since there is no equilibrated neighborhood, the concept of temperature doesn't hold, and the temperature becomes undefined. This local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist. As an example, LTE will exist in a glass of water that contains a melting ice cube. The temperature inside the glass can be defined at any point, but it is colder near the ice cube than far away from it. If energies of the molecules located near a given point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for a certain temperature. If the energies of the molecules located near another point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for another temperature. Local thermodynamic equilibrium does not require either local or global stationarity. In other words, each small locality need not have a constant temperature. However, it does require that each small locality change slowly enough to practically sustain its local Maxwell–Boltzmann distribution of molecular velocities. A global non-equilibrium state can be stably stationary only if it is maintained by exchanges between the system and the outside. 
For example, a globally-stable stationary state could be maintained inside the glass of water by continuously adding finely powdered ice into it to compensate for the melting, and continuously draining off the meltwater. Natural transport phenomena may lead a system from local to global thermodynamic equilibrium. Going back to our example, the diffusion of heat will lead our glass of water toward global thermodynamic equilibrium, a state in which the temperature of the glass is completely homogeneous. Reservations Careful and well informed writers about thermodynamics, in their accounts of thermodynamic equilibrium, often enough make provisos or reservations to their statements. Some writers leave such reservations merely implied or more or less unstated. For example, one widely cited writer, H. B. Callen writes in this context: "In actuality, few systems are in absolute and true equilibrium." He refers to radioactive processes and remarks that they may take "cosmic times to complete, [and] generally can be ignored". He adds "In practice, the criterion for equilibrium is circular. Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory!" J.A. Beattie and I. Oppenheim write: "Insistence on a strict interpretation of the definition of equilibrium would rule out the application of thermodynamics to practically all states of real systems." Another author, cited by Callen as giving a "scholarly and rigorous treatment", and cited by Adkins as having written a "classic text", A.B. Pippard writes in that text: "Given long enough a supercooled vapour will eventually condense, ... . The time involved may be so enormous, however, perhaps 10100 years or more, ... . For most purposes, provided the rapid change is not artificially stimulated, the systems may be regarded as being in equilibrium." Another author, A. Münster, writes in this context. He observes that thermonuclear processes often occur so slowly that they can be ignored in thermodynamics. He comments: "The concept 'absolute equilibrium' or 'equilibrium with respect to all imaginable processes', has therefore, no physical significance." He therefore states that: "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions." According to L. Tisza: "... in the discussion of phenomena near absolute zero. The absolute predictions of the classical theory become particularly vague because the occurrence of frozen-in nonequilibrium states is very common." Definitions The most general kind of thermodynamic equilibrium of a system is through contact with the surroundings that allows simultaneous passages of all chemical substances and all kinds of energy. A system in thermodynamic equilibrium may move with uniform acceleration through space but must not change its shape or size while doing so; thus it is defined by a rigid volume in space. It may lie within external fields of force, determined by external factors of far greater extent than the system itself, so that events within the system cannot in an appreciable amount affect the external fields of force. The system can be in thermodynamic equilibrium only if the external force fields are uniform, and are determining its uniform acceleration, or if it lies in a non-uniform force field but is held stationary there by local forces, such as mechanical pressures, on its surface. Thermodynamic equilibrium is a primitive notion of the theory of thermodynamics. According to P.M. 
Morse: "It should be emphasized that the fact that there are thermodynamic states, ..., and the fact that there are thermodynamic variables which are uniquely specified by the equilibrium state ... are not conclusions deduced logically from some philosophical first principles. They are conclusions ineluctably drawn from more than two centuries of experiments." This means that thermodynamic equilibrium is not to be defined solely in terms of other theoretical concepts of thermodynamics. M. Bailyn proposes a fundamental law of thermodynamics that defines and postulates the existence of states of thermodynamic equilibrium. Textbook definitions of thermodynamic equilibrium are often stated carefully, with some reservation or other. For example, A. Münster writes: "An isolated system is in thermodynamic equilibrium when, in the system, no changes of state are occurring at a measurable rate." There are two reservations stated here; the system is isolated; any changes of state are immeasurably slow. He discusses the second proviso by giving an account of a mixture oxygen and hydrogen at room temperature in the absence of a catalyst. Münster points out that a thermodynamic equilibrium state is described by fewer macroscopic variables than is any other state of a given system. This is partly, but not entirely, because all flows within and through the system are zero. R. Haase's presentation of thermodynamics does not start with a restriction to thermodynamic equilibrium because he intends to allow for non-equilibrium thermodynamics. He considers an arbitrary system with time invariant properties. He tests it for thermodynamic equilibrium by cutting it off from all external influences, except external force fields. If after insulation, nothing changes, he says that the system was in equilibrium. In a section headed "Thermodynamic equilibrium", H.B. Callen defines equilibrium states in a paragraph. He points out that they "are determined by intrinsic factors" within the system. They are "terminal states", towards which the systems evolve, over time, which may occur with "glacial slowness". This statement does not explicitly say that for thermodynamic equilibrium, the system must be isolated; Callen does not spell out what he means by the words "intrinsic factors". Another textbook writer, C.J. Adkins, explicitly allows thermodynamic equilibrium to occur in a system which is not isolated. His system is, however, closed with respect to transfer of matter. He writes: "In general, the approach to thermodynamic equilibrium will involve both thermal and work-like interactions with the surroundings." He distinguishes such thermodynamic equilibrium from thermal equilibrium, in which only thermal contact is mediating transfer of energy. Another textbook author, J.R. Partington, writes: "(i) An equilibrium state is one which is independent of time." But, referring to systems "which are only apparently in equilibrium", he adds : "Such systems are in states of ″false equilibrium.″" Partington's statement does not explicitly state that the equilibrium refers to an isolated system. Like Münster, Partington also refers to the mixture of oxygen and hydrogen. He adds a proviso that "In a true equilibrium state, the smallest change of any external condition which influences the state will produce a small change of state ..." This proviso means that thermodynamic equilibrium must be stable against small perturbations; this requirement is essential for the strict meaning of thermodynamic equilibrium. 
A student textbook by F.H. Crawford has a section headed "Thermodynamic Equilibrium". It distinguishes several drivers of flows, and then says: "These are examples of the apparently universal tendency of isolated systems toward a state of complete mechanical, thermal, chemical, and electrical—or, in a single word, thermodynamic—equilibrium." A monograph on classical thermodynamics by H.A. Buchdahl considers the "equilibrium of a thermodynamic system", without actually writing the phrase "thermodynamic equilibrium". Referring to systems closed to exchange of matter, Buchdahl writes: "If a system is in a terminal condition which is properly static, it will be said to be in equilibrium." Buchdahl's monograph also discusses amorphous glass, for the purposes of thermodynamic description. It states: "More precisely, the glass may be regarded as being in equilibrium so long as experimental tests show that 'slow' transitions are in effect reversible." It is not customary to make this proviso part of the definition of thermodynamic equilibrium, but the converse is usually assumed: that if a body in thermodynamic equilibrium is subject to a sufficiently slow process, that process may be considered to be sufficiently nearly reversible, and the body remains sufficiently nearly in thermodynamic equilibrium during the process. A. Münster carefully extends his definition of thermodynamic equilibrium for isolated systems by introducing a concept of contact equilibrium. This specifies particular processes that are allowed when considering thermodynamic equilibrium for non-isolated systems, with special concern for open systems, which may gain or lose matter from or to their surroundings. A contact equilibrium is between the system of interest and a system in the surroundings, brought into contact with the system of interest, the contact being through a special kind of wall; for the rest, the whole joint system is isolated. Walls of this special kind were also considered by C. Carathéodory, and are mentioned by other writers also. They are selectively permeable. They may be permeable only to mechanical work, or only to heat, or only to some particular chemical substance. Each contact equilibrium defines an intensive parameter; for example, a wall permeable only to heat defines an empirical temperature. A contact equilibrium can exist for each chemical constituent of the system of interest. In a contact equilibrium, despite the possible exchange through the selectively permeable wall, the system of interest is changeless, as if it were in isolated thermodynamic equilibrium. This scheme follows the general rule that "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Thermodynamic equilibrium for an open system means that, with respect to every relevant kind of selectively permeable wall, contact equilibrium exists when the respective intensive parameters of the system and surroundings are equal. This definition does not consider the most general kind of thermodynamic equilibrium, which is through unselective contacts. This definition does not simply state that no current of matter or energy exists in the interior or at the boundaries; but it is compatible with the following definition, which does so state. M. Zemansky also distinguishes mechanical, chemical, and thermal equilibrium. He then writes: "When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium". P.M. 
Morse writes that thermodynamics is concerned with "states of thermodynamic equilibrium". He also uses the phrase "thermal equilibrium" while discussing transfer of energy as heat between a body and a heat reservoir in its surroundings, though not explicitly defining a special term 'thermal equilibrium'. J.R. Waldram writes of "a definite thermodynamic state". He defines the term "thermal equilibrium" for a system "when its observables have ceased to change over time". But shortly below that definition he writes of a piece of glass that has not yet reached its "full thermodynamic equilibrium state". Considering equilibrium states, M. Bailyn writes: "Each intensive variable has its own type of equilibrium." He then defines thermal equilibrium, mechanical equilibrium, and material equilibrium. Accordingly, he writes: "If all the intensive variables become uniform, thermodynamic equilibrium is said to exist." He is not here considering the presence of an external force field. J.G. Kirkwood and I. Oppenheim define thermodynamic equilibrium as follows: "A system is in a state of thermodynamic equilibrium if, during the time period allotted for experimentation, (a) its intensive properties are independent of time and (b) no current of matter or energy exists in its interior or at its boundaries with the surroundings." It is evident that they are not restricting the definition to isolated or to closed systems. They do not discuss the possibility of changes that occur with "glacial slowness", and proceed beyond the time period allotted for experimentation. They note that for two systems in contact, there exists a small subclass of intensive properties such that if all those of that small subclass are respectively equal, then all respective intensive properties are equal. States of thermodynamic equilibrium may be defined by this subclass, provided some other conditions are satisfied. Characteristics of a state of internal thermodynamic equilibrium Homogeneity in the absence of external forces A thermodynamic system consisting of a single phase in the absence of external forces, in its own internal thermodynamic equilibrium, is homogeneous. This means that the material in any small volume element of the system can be interchanged with the material of any other geometrically congruent volume element of the system, and the effect is to leave the system thermodynamically unchanged. In general, a strong external force field makes a system of a single phase in its own internal thermodynamic equilibrium inhomogeneous with respect to some intensive variables. For example, a relatively dense component of a mixture can be concentrated by centrifugation. Uniform temperature Such equilibrium inhomogeneity, induced by external forces, does not occur for the intensive variable temperature. According to E.A. Guggenheim, "The most important conception of thermodynamics is temperature." Planck introduces his treatise with a brief account of heat and temperature and thermal equilibrium, and then announces: "In the following we shall deal chiefly with homogeneous, isotropic bodies of any form, possessing throughout their substance the same temperature and density, and subject to a uniform pressure acting everywhere perpendicular to the surface." As did Carathéodory, Planck was setting aside surface effects and external fields and anisotropic crystals. Though referring to temperature, Planck did not there explicitly refer to the concept of thermodynamic equilibrium. 
In contrast, Carathéodory's scheme of presentation of classical thermodynamics for closed systems postulates the concept of an "equilibrium state" following Gibbs (Gibbs speaks routinely of a "thermodynamic state"), though not explicitly using the phrase 'thermodynamic equilibrium', nor explicitly postulating the existence of a temperature to define it. Although thermodynamic laws are immutable, systems can be created that delay the time to reach thermodynamic equilibrium. In a thought experiment, Reed A. Howald conceived of a system called "The Fizz Keeper", consisting of a cap with a nozzle that can re-pressurize any standard bottle of carbonated beverage. Nitrogen and oxygen, of which air is mostly made, would keep getting pumped in, slowing the rate at which the carbon dioxide fizzes out of the beverage. This is possible because the thermodynamic equilibrium between the dissolved and gaseous carbon dioxide inside the bottle would stay the same. To come to this conclusion, he also appeals to Henry's Law, which states that gases dissolve in direct proportion to their partial pressures. By influencing the partial pressure at the top of the closed system, this would help slow down the rate at which the carbonated beverage goes flat, a rate governed by thermodynamic equilibrium. The equilibria of carbon dioxide and other gases would not change; however, the partial pressure on top would slow down the rate at which dissolved gas escapes, extending the time the gas stays in a particular state, owing to the nature of the thermal equilibrium of the remainder of the beverage. The equilibrium constant of carbon dioxide would be completely independent of the nitrogen and oxygen pumped into the system, which would slow down the diffusion of gas, and yet not have an impact on the thermodynamics of the entire system. The temperature within a system in thermodynamic equilibrium is uniform in space as well as in time. In a system in its own state of internal thermodynamic equilibrium, there are no net internal macroscopic flows. In particular, this means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrange multipliers. Considerations of kinetic theory or statistical mechanics also support this statement. In order that a system may be in its own internal state of thermodynamic equilibrium, it is of course necessary, but not sufficient, that it be in its own internal state of thermal equilibrium; it is possible for a system to reach internal mechanical equilibrium before it reaches internal thermal equilibrium. Number of real variables needed for specification In his exposition of his scheme of closed system equilibrium thermodynamics, C. Carathéodory initially postulates that experiment reveals that a definite number of real variables define the states that are the points of the manifold of equilibria. In the words of Prigogine and Defay (1945): "It is a matter of experience that when we have specified a certain number of macroscopic properties of a system, then all the other properties are fixed." As noted above, according to A. Münster, the number of variables needed to define a thermodynamic equilibrium is the least for any state of a given isolated system. As noted above, J.G.
Kirkwood and I. Oppenheim point out that a state of thermodynamic equilibrium may be defined by a special subclass of intensive variables, with a definite number of members in that subclass. If the thermodynamic equilibrium lies in an external force field, it is only the temperature that can in general be expected to be spatially uniform. Intensive variables other than temperature will in general be non-uniform if the external force field is non-zero. In such a case, in general, additional variables are needed to describe the spatial non-uniformity. Stability against small perturbations As noted above, J.R. Partington points out that a state of thermodynamic equilibrium is stable against small transient perturbations. Without this condition, in general, experiments intended to study systems in thermodynamic equilibrium are in severe difficulties. Approach to thermodynamic equilibrium within an isolated system When a body of material starts from a non-equilibrium state of inhomogeneity or chemical non-equilibrium, and is then isolated, it spontaneously evolves towards its own internal state of thermodynamic equilibrium. It is not necessary that all aspects of internal thermodynamic equilibrium be reached simultaneously; some can be established before others. For example, in many cases of such evolution, internal mechanical equilibrium is established much more rapidly than the other aspects of the eventual thermodynamic equilibrium. Another example is that, in many cases of such evolution, thermal equilibrium is reached much more rapidly than chemical equilibrium. Fluctuations within an isolated system in its own internal thermodynamic equilibrium In an isolated system, thermodynamic equilibrium by definition persists over an indefinitely long time. In classical physics it is often convenient to ignore the effects of measurement and this is assumed in the present account. To consider the notion of fluctuations in an isolated thermodynamic system, a convenient example is a system specified by its extensive state variables, internal energy, volume, and mass composition. By definition they are time-invariant. By definition, they combine with time-invariant nominal values of their conjugate intensive functions of state, inverse temperature, pressure divided by temperature, and the chemical potentials divided by temperature, so as to exactly obey the laws of thermodynamics. But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system. It may be admitted that on repeated measurement of those conjugate intensive functions of state, they are found to have slightly different values from time to time. Such variability is regarded as due to internal fluctuations. The different measured values average to their nominal values. If the system is truly macroscopic as postulated by classical thermodynamics, then the fluctuations are too small to detect macroscopically. This is called the thermodynamic limit. In effect, the molecular nature of matter and the quantal nature of momentum transfer have vanished from sight, too small to see. According to Buchdahl: "... there is no place within the strictly phenomenological theory for the idea of fluctuations about equilibrium (see, however, Section 76)." 
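The claim that fluctuations become macroscopically invisible in the thermodynamic limit can be illustrated numerically. The sketch below uses an invented toy model of N independent two-state units (it is not drawn from the authors cited here) and shows the relative fluctuation of the total energy falling off roughly as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_energy_fluctuation(n_units: int, p_excited: float = 0.3,
                                samples: int = 5000) -> float:
    """Relative standard deviation of the total energy of n_units independent
    two-state units, each excited (with unit energy) with probability p_excited."""
    excited_counts = rng.binomial(n_units, p_excited, size=samples)
    return excited_counts.std() / excited_counts.mean()

for n in (100, 10_000, 1_000_000):
    print(n, round(relative_energy_fluctuation(n), 6))
# Relative fluctuations shrink roughly as 1/sqrt(N): about 0.15, 0.015, 0.0015.
```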
If the system is repeatedly subdivided, eventually a system is produced that is small enough to exhibit obvious fluctuations. This is a mesoscopic level of investigation. The fluctuations are then directly dependent on the natures of the various walls of the system. The precise choice of independent state variables is then important. At this stage, statistical features of the laws of thermodynamics become apparent. If the mesoscopic system is further repeatedly divided, eventually a microscopic system is produced. Then the molecular character of matter and the quantal nature of momentum transfer become important in the processes of fluctuation. One has left the realm of classical or macroscopic thermodynamics, and one needs quantum statistical mechanics. The fluctuations can become relatively dominant, and questions of measurement become important. The statement that 'the system is in its own internal thermodynamic equilibrium' may be taken to mean that 'indefinitely many such measurements have been taken from time to time, with no trend in time in the various measured values'. Thus the statement, that 'a system is in its own internal thermodynamic equilibrium, with stated nominal values of its functions of state conjugate to its specifying state variables', is far more informative than a statement that 'a set of single simultaneous measurements of those functions of state have those same values'. This is because the single measurements might have been made during a slight fluctuation, away from another set of nominal values of those conjugate intensive functions of state, that is due to unknown and different constitutive properties. A single measurement cannot tell whether that might be so, unless there is also knowledge of the nominal values that belong to the equilibrium state. Thermal equilibrium An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which several irreversible processes are occurring, entailing non-zero fluxes; the two systems are separated by a wall permeable only to heat. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process." This illustrates the importance for thermodynamics of the concept of temperature. Thermal equilibrium is achieved when two systems in thermal contact with each other cease to have a net exchange of energy. It follows that if two systems are in thermal equilibrium, then their temperatures are the same. Thermal equilibrium occurs when a system's macroscopic thermal observables have ceased to change with time. For example, an ideal gas whose distribution function has stabilised to a specific Maxwell–Boltzmann distribution would be in thermal equilibrium. This outcome allows a single temperature and pressure to be attributed to the whole system.
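As a small numerical illustration of the Maxwell–Boltzmann criterion just mentioned, the following sketch evaluates the speed distribution for an assumed sample of nitrogen at 300 K and checks, by crude numerical integration, that it is normalized and reproduces the textbook mean speed √(8kT/πm); the gas, the temperature, and the integration grid are illustrative choices.

```python
import math

# Maxwell–Boltzmann speed distribution for an ideal gas at temperature T:
# f(v) = 4*pi*(m/(2*pi*k*T))**1.5 * v**2 * exp(-m*v**2/(2*k*T)).

k = 1.380649e-23        # Boltzmann constant, J/K
m = 4.652e-26           # mass of an N2 molecule, kg (illustrative choice of gas)
T = 300.0               # temperature, K

def maxwell_boltzmann(v: float) -> float:
    """Probability density of molecular speed v at temperature T."""
    a = m / (2.0 * math.pi * k * T)
    return 4.0 * math.pi * a**1.5 * v**2 * math.exp(-m * v**2 / (2.0 * k * T))

# Crude numerical integration to check normalization and the mean speed.
dv = 0.5
speeds = [i * dv for i in range(1, 40000)]
norm = sum(maxwell_boltzmann(v) * dv for v in speeds)
mean_speed = sum(v * maxwell_boltzmann(v) * dv for v in speeds)

print(round(norm, 4))                                   # ~1.0
print(round(mean_speed, 1))                             # ~476 m/s
print(round(math.sqrt(8 * k * T / (math.pi * m)), 1))   # analytic mean speed, ~476 m/s
```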
For an isolated body, it is quite possible for mechanical equilibrium to be reached before thermal equilibrium is reached, but eventually, all aspects of equilibrium, including thermal equilibrium, are necessary for thermodynamic equilibrium. Non-equilibrium A system's internal state of thermodynamic equilibrium should be distinguished from a "stationary state" in which thermodynamic parameters are unchanging in time but the system is not isolated, so that there are, into and out of the system, non-zero macroscopic fluxes which are constant in time. Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods. Laws governing systems which are far from equilibrium are also debatable. One of the guiding principles for these systems is the maximum entropy production principle. It states that a non-equilibrium system evolves such as to maximize its entropy production. See also Thermodynamic models Non-random two-liquid model (NRTL model) - Phase equilibrium calculations UNIQUAC model - Phase equilibrium calculations Time crystal Topics in control theory Coefficient diagram method Control reconfiguration Feedback H infinity Hankel singular value Krener's theorem Lead-lag compensator Markov chain approximation method Minor loop feedback Multi-loop feedback Positive systems Radial basis function Root locus Signal-flow graphs Stable polynomial State space representation Steady state Transient state Underactuation Youla–Kucera parametrization Other related topics Automation and remote control Bond graph Control engineering Control–feedback–abort loop Controller (control theory) Cybernetics Intelligent control Mathematical system theory Negative feedback amplifier People in systems and control Perceptual control theory Systems theory Time scale calculus General references C. Michael Hogan, Leda C. Patmore and Harry Seidman (1973) Statistical Prediction of Dynamic Thermal Equilibrium Temperatures using Standard Meteorological Data Bases, Second Edition (EPA-660/2-73-003 2006) United States Environmental Protection Agency Office of Research and Development, Washington, D.C. Cesare Barbieri (2007) Fundamentals of Astronomy. First Edition (QB43.3.B37 2006) CRC Press , F. Mandl (1988) Statistical Physics, Second Edition, John Wiley & Sons Hans R. Griem (2005) Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York References Cited bibliography Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, third edition, McGraw-Hill, London, . Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier Scientific Publishing, Amsterdam, . Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley. Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK. Callen, H.B. (1960/1985). 
Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London. Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc. de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, . Denbigh, K.G. (1951). Thermodynamics of the Steady State, Methuen, London. Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, . Fitts, D.D. (1962). Nonequilibrium thermodynamics. A Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York. Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108–248, 343–524, reprinted in The Collected Works of J. Willard Gibbs, PhD, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353. Griem, H.R. (2005). Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York . Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland, Amsterdam. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York. Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York. Levine, I.N. (1983), Physical Chemistry, second edition, McGraw-Hill, New York, . Morse, P.M. (1969). Thermal Physics, second edition, W.A. Benjamin, Inc, New York. Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London. Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London. Pippard, A.B. (1957/1966). The Elements of Classical Thermodynamics, reprinted with corrections 1966, Cambridge University Press, London. Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia. Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège. Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London. Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ. ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA. Also published in Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA. Uhlenbeck, G.E., Ford, G.W. (1963). 
Lectures in Statistical Mechanics, American Mathematical Society, Providence RI. Waldram, J.R. (1985). The Theory of Thermodynamics, Cambridge University Press, Cambridge UK, . Zemansky, M. (1937/1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition 1967, McGraw–Hill Book Company, New York. External links Breakdown of Local Thermodynamic Equilibrium George W. Collins, The Fundamentals of Stellar Astrophysics, Chapter 15 Local Thermodynamic Equilibrium Non-Local Thermodynamic Equilibrium in Cloudy Planetary Atmospheres Paper by R. E. Samueison quantifying the effects due to non-LTE in an atmosphere Thermodynamic Equilibrium, Local and otherwise lecture by Michael Richmond Equilibrium chemistry Thermodynamic cycles Thermodynamic processes Thermodynamic systems Thermodynamics
Pathogenesis
In pathology, pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. The word comes from the Greek pathos ('suffering, disease') and genesis ('creation, origin'). Description Types of pathogenesis include microbial infection, inflammation, malignancy and tissue breakdown. For example, bacterial pathogenesis is the process by which bacteria cause infectious illness. Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system (such as skin tumors and lymphoma after a renal transplant, which requires immunosuppression). As another example, Streptococcus pneumoniae is spread through contact with respiratory secretions, such as saliva, mucus, or cough droplets from an infected person; it then colonizes the upper respiratory tract and begins to multiply. The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which, if controlled, would allow the disease to be prevented. Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology. Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference. See also Causal inference Epidemiology Molecular pathological epidemiology Molecular pathology Pathology Pathophysiology Salutogenesis References Further reading Pathology
Serine
Serine (symbol Ser or S) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), a carboxyl group (which is in the deprotonated −COO− form under biological conditions), and a side chain consisting of a hydroxymethyl group, classifying it as a polar amino acid. It can be synthesized in the human body under normal physiological circumstances, making it a nonessential amino acid. It is encoded by the codons UCU, UCC, UCA, UCG, AGU and AGC. Occurrence This compound is one of the proteinogenic amino acids. Only the L-stereoisomer appears naturally in proteins. It is not essential to the human diet, since it is synthesized in the body from other metabolites, including glycine. Serine was first obtained from silk protein, a particularly rich source, in 1865 by Emil Cramer. Its name is derived from the Latin for silk, sericum. Serine's structure was established in 1902. Biosynthesis The biosynthesis of serine starts with the oxidation of 3-phosphoglycerate (an intermediate from glycolysis) to 3-phosphohydroxypyruvate and NADH by phosphoglycerate dehydrogenase. Reductive amination (transamination) of this ketone by phosphoserine transaminase yields 3-phosphoserine (O-phosphoserine), which is hydrolyzed to serine by phosphoserine phosphatase. In bacteria such as E. coli these enzymes are encoded by the genes serA (EC 1.1.1.95), serC (EC 2.6.1.52), and serB (EC 3.1.3.3). Serine hydroxymethyltransferase (SHMT) also catalyzes the biosynthesis of glycine (retro-aldol cleavage) from serine, transferring the resulting formaldehyde synthon to 5,6,7,8-tetrahydrofolate. However, that reaction is reversible, and will convert excess glycine to serine. SHMT is a pyridoxal phosphate (PLP) dependent enzyme. Synthesis and reactions Industrially, L-serine is produced from glycine and methanol catalyzed by hydroxymethyltransferase. Racemic serine can be prepared in the laboratory from methyl acrylate in several steps. Hydrogenation of serine gives the diol serinol. Biological function Metabolic Serine is important in metabolism in that it participates in the biosynthesis of purines and pyrimidines. It is the precursor to several amino acids including glycine and cysteine, as well as tryptophan in bacteria. It is also the precursor to numerous other metabolites, including sphingolipids and folate, which is the principal donor of one-carbon fragments in biosynthesis. Signaling D-Serine, synthesized in neurons by serine racemase from L-serine (its enantiomer), serves as a neuromodulator by coactivating NMDA receptors, making them able to open if they then also bind glutamate. D-serine is a potent agonist at the glycine site (NR1) of canonical diheteromeric NMDA receptors. For the receptor to open, glutamate and either glycine or D-serine must bind to it; in addition a pore blocker must not be bound (e.g. Mg2+ or Zn2+). Some research has shown that D-serine is a more potent agonist at the NMDAR glycine site than glycine itself. However, D-serine has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors through the glycine binding site on the GluN3 subunit. Ligands D-serine was thought to exist only in bacteria until relatively recently; it was the second D amino acid discovered to naturally exist in humans, present as a signaling molecule in the brain, soon after the discovery of D-aspartate.
Had D amino acids been discovered in humans sooner, the glycine site on the NMDA receptor might instead be named the D-serine site. Apart from the central nervous system, D-serine plays a signaling role in peripheral tissues and organs such as cartilage, kidney, and corpus cavernosum. Gustatory sensation Pure D-serine is an off-white crystalline powder with a very faint musty aroma. D-Serine is sweet with an additional minor sour taste at medium and high concentrations. Clinical significance Serine deficiency disorders are rare defects in the biosynthesis of the amino acid L-serine. At present three disorders have been reported: 3-phosphoglycerate dehydrogenase deficiency, 3-phosphoserine phosphatase deficiency, and phosphoserine aminotransferase deficiency. These enzyme defects lead to severe neurological symptoms such as congenital microcephaly and severe psychomotor retardation and, in addition, in patients with 3-phosphoglycerate dehydrogenase deficiency, to intractable seizures. These symptoms respond to a variable degree to treatment with L-serine, sometimes combined with glycine. Response to treatment is variable and the long-term and functional outcome is unknown. To provide a basis for improving the understanding of the epidemiology, genotype/phenotype correlation and outcome of these diseases, and of their impact on the quality of life of patients, as well as for evaluating diagnostic and therapeutic strategies, a patient registry was established by the noncommercial International Working Group on Neurotransmitter Related Disorders (iNTD). Besides disruption of serine biosynthesis, its transport may also become disrupted. One example is spastic tetraplegia, thin corpus callosum, and progressive microcephaly, a disease caused by mutations that affect the function of the neutral amino acid transporter A. Research for therapeutic use The classification of L-serine as a non-essential amino acid has come to be considered as conditional, since vertebrates such as humans cannot always synthesize optimal quantities over entire lifespans. Safety of L-serine has been demonstrated in an FDA-approved human phase I clinical trial with amyotrophic lateral sclerosis (ALS) patients (ClinicalTrials.gov identifier: NCT01835782), but efficacy in treating ALS symptoms has yet to be shown. A 2011 meta-analysis found adjunctive sarcosine to have a medium effect size for negative and total symptoms of schizophrenia. There also is evidence that L-serine could acquire a therapeutic role in diabetes. D-Serine is being studied in rodents as a potential treatment for schizophrenia. D-Serine also has been described as a potential biomarker for early Alzheimer's disease (AD) diagnosis, due to a relatively high concentration of it in the cerebrospinal fluid of probable AD patients. D-serine, which is made in the brain, has been shown to work as an antagonist/inverse co-agonist of t-NMDA receptors, mitigating neuron loss in an animal model of temporal lobe epilepsy. D-Serine has been theorized as a potential treatment for sensorineural hearing disorders such as hearing loss and tinnitus. See also Isoserine Homoserine (isothreonine) Serine octamer cluster References External links Serine MS Spectrum Alpha-Amino acids Proteinogenic amino acids Glucogenic amino acids NMDA receptor agonists Glycine receptor agonists Aldols Amino alcohols Inhibitory amino acids
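As a concrete illustration of the genetic-code facts mentioned above (serine is encoded by the codons UCU, UCC, UCA, UCG, AGU and AGC), here is a short, self-contained sketch. It is not drawn from the cited literature; the example sequence is hypothetical and the function name is invented for illustration.

    # Codons encoding serine, as listed in the text above (RNA alphabet).
    SERINE_CODONS = {"UCU", "UCC", "UCA", "UCG", "AGU", "AGC"}

    def serine_codon_indices(mrna: str) -> list[int]:
        """Return the 0-based indices of codons that encode serine.

        Assumes the reading frame starts at position 0 and that the
        sequence is read in non-overlapping triplets.
        """
        mrna = mrna.upper().replace("T", "U")  # tolerate DNA-style input
        return [i // 3
                for i in range(0, len(mrna) - 2, 3)
                if mrna[i:i + 3] in SERINE_CODONS]

    # Hypothetical example: codons AUG, UCU, GGC, AGU -> serine at positions 1 and 3.
    print(serine_codon_indices("AUGUCUGGCAGU"))  # [1, 3]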
Design science (methodology)
Design science research (DSR) is a research paradigm focusing on the development and validation of prescriptive knowledge in information science. Herbert Simon distinguished the natural sciences, concerned with explaining how things are, from design sciences which are concerned with how things ought to be, that is, with devising artifacts to attain goals. Design science research methodology (DSRM) refers to the research methodologies associated with this paradigm. It spans the methodologies of several research disciplines, for example information technology, which offers specific guidelines for evaluation and iteration within research projects. DSR focuses on the development and performance of (designed) artifacts with the explicit intention of improving the functional performance of the artifact. DSRM is typically applied to categories of artifacts including algorithms, human/computer interfaces, design methodologies (including process models) and languages. Its application is most notable in the Engineering and Computer Science disciplines, though is not restricted to these and can be found in many disciplines and fields. DSR, or constructive research, in contrast to explanatory science research, has academic research objectives generally of a more pragmatic nature. Research in these disciplines can be seen as a quest for understanding and improving human performance. Such renowned research institutions as the MIT Media Lab, Stanford University's Center for Design Research, Carnegie Mellon University's Software Engineering Institute, Xerox’s PARC, and Brunel University London’s Organisation and System Design Centre, use the DSR approach. Design science is a valid research methodology to develop solutions for practical engineering problems. Design science is particularly suitable for wicked problems. Objectives The main goal of DSR is to develop knowledge that professionals of the discipline in question can use to design solutions for their field problems. Design sciences focus on the process of making choices on what is possible and useful for the creation of possible futures, rather than on what is currently existing. This mission can be compared to the one of the ‘explanatory sciences’, like the natural sciences and sociology, which is to develop knowledge to describe, explain and predict. Hevner states that the main purpose of DSR is achieving knowledge and understanding of a problem domain by building and application of a designed artifact. Evolution and applications Since the first days of computer science, computer scientists have been doing DSR without naming it. They have developed new architectures for computers, new programming languages, new compilers, new algorithms, new data and file structures, new data models, new database management systems, and so on. Much of the early research was focused on systems development approaches and methods. The dominant research philosophy in many disciplines has focused on developing cumulative, theory-based research results in order to make prescriptions. It seems that this ‘theory-with-practical-implications’ research strategy has not delivered on this aim, which led to search for practical research methods such as DSR. Characteristics The design process is a sequence of expert activities that produces an innovative product. The artifact enables the researcher to get a better grasp of the problem; the re-evaluation of the problem improves the quality of the design process and so on. 
This build-and-evaluate loop is typically iterated a number of times before the final design artifact is generated. In DSR, the focus is on the so-called field-tested and grounded technological rule as a possible product of Mode 2 research with the potential to improve the relevance of academic research in management. Mode 1 knowledge production is purely academic and mono-disciplinary, while Mode 2 is multidisciplinary and aims at solving complex and relevant field problems. Guidelines in information systems research Hevner et al. have presented a set of guidelines for DSR within the discipline of Information Systems (IS). DSR requires the creation of an innovative, purposeful artifact for a special problem domain. The artifact must be evaluated in order to ensure its utility for the specified problem. In order to form a novel research contribution, the artifact must either solve a problem that has not yet been solved, or provide a more effective solution. Both the construction and evaluation of the artifact must be done rigorously, and the results of the research presented effectively both to technology-oriented and management-oriented audiences. Hevner counts 7 guidelines for a DSR: Design as an artifact: Design-science research must produce a viable artifact in the form of a construct, a model, a method, or an instantiation. Problem relevance: The objective of design-science research is to develop technology-based solutions to important and relevant business problems. Design evaluation: The utility, quality, and efficacy of a design artifact must be rigorously demonstrated via well-executed evaluation methods. Research contributions: Effective design-science research must provide clear and verifiable contributions in the areas of the design artifact, design foundations, and/or design methodologies. Research rigor: Design-science research relies upon the application of rigorous methods in both the construction and evaluation of the design artifact. Design as a search process: The search for an effective artifact requires utilizing available means to reach desired ends while satisfying laws in the problem environment. Communication of research: Design-science research must be presented effectively both to technology-oriented as well as management-oriented audiences. Transparency in DSR is becoming an emerging concern. DSR strives to be practical and relevant. Yet few researchers have examined the extent to which practitioners can meaningfully utilize theoretical knowledge produced by DSR in solving concrete real-world problems. There is a potential gulf between theoretical propositions and concrete issues faced in practice—a challenge known as design theory indeterminacy. Guidelines for addressing this challenges are provided in Lukyanenko et al. 2020. The engineering cycle and the design cycle The engineering cycle is a framework used in Design Science for Information Systems and Software Engineering, proposed by Roel Wieringa. Artifacts Artifacts within DSR are perceived to be knowledge containing. This knowledge ranges from the design logic, construction methods and tool to assumptions about the context in which the artifact is intended to function (Gregor, 2002). The creation and evaluation of artifacts thus forms an important part in the DSR process which was described by Hevner et al., (2004) and supported by March and Storey (2008) as revolving around “build and evaluate”. 
DSR artifacts can broadly include: models, methods, constructs, instantiations and design theories (March & Smith, 1995; Gregor 2002; March & Storey, 2008, Gregor and Hevner 2013), social innovations, new or previously unknown properties of technical/social/informational resources (March, Storey, 2008), new explanatory theories, new design and developments models and implementation processes or methods (Ellis & Levy 2010). A three-cycle view DSR can be seen as an embodiment of three closely related cycles of activities. The relevance cycle initiates DSR with an application context that not only provides the requirements for the research as inputs but also defines acceptance criteria for the ultimate evaluation of the research results. The rigor cycle provides past knowledge to the research project to ensure its innovation. It is incumbent upon the researchers to thoroughly research and reference the knowledge base in order to guarantee that the designs produced are research contributions and not routine designs based upon the application of well-known processes. The central design cycle iterates between the core activities of building and evaluating the design artifacts and processes of the research. Ethical issues DSR in itself implies an ethical change from describing and explaining of the existing world to shaping it. One can question the values of information system research, i.e., whose values and what values dominate it, emphasizing that research may openly or latently serve the interests of particular dominant groups. The interests served may be those of the host organization as perceived by its top management, those of information system users, those of information system professionals or potentially those of other stakeholder groups in society. Academic Examples of Design Science Research There are limited references to examples of DSR, but Adams has completed two PhD research topics using Peffers et al.'s DSRP (both associated with digital forensics but from different perspectives): 2013: The Advanced Data Acquisition Model (ADAM): A process model for digital forensic practice 2024: The Advanced Framework for Evaluating Remote Agents (AFERA): A Framework for Digital Forensic Practitioners See also Empirical research Action research Participant observation Case study Design thinking References Research examples Adams, R., Hobbs, V., Mann, G., (2013). The Advanced Data Acquisition Model (ADAM): A process model for digital forensic practice. URL: http://researchrepository.murdoch.edu.au/id/eprint/14422/2/02Whole.pdf Further reading March, S. T., Smith, G. F., (1995). Design and natural science research on information technology. Decision Support Systems, 15(4), pp. 251–266. March, S. T., Storey, V. C., (2008). Design Science in the Information Systems Discipline: An introduction to the special issue on design science research, MIS Quarterly, Vol. 32(4), pp. 725–730. Opdenakker, Raymond en Carin Cuijpers (2019),’Effective Virtual Project Teams: A Design Science Approach to Building a Strategic Momentum’, Springer Verlag. Van Aken, J. E. (2004). Management Research Based on the Paradigm of the Design Sciences: The Quest for Field-Tested and Grounded Technological Rules. Journal of Management Studies, 41(2), 219–246. Watts S, Shankaranarayanan G., Even A. Data quality assessment in context: A cognitive perspective. Decis Support Syst. 2009;48(1):202-211. External links Design Science Research in Information System and Technology community Research methods
The three Rs
The three Rs are three basic skills taught in schools: reading, writing and arithmetic (the "R's", pronounced in the English alphabet "ARs", refer to "Reading, wRiting (where the W is unnecessary), and ARithmetic"). The phrase appears to have been coined at the beginning of the 19th century. The term has also been used to name other triples (see Other uses). Origin and meaning The skills themselves are alluded to in St. Augustine's Confessions: 'learning to read, and write, and do arithmetic'. The phrase is sometimes attributed to a speech given by Sir William Curtis circa 1807: this is disputed. An extended modern version of the three Rs consists of the "functional skills of literacy, numeracy and ICT". The educationalist Louis P. Bénézet preferred "to read", "to reason", "to recite", adding, "by reciting I did not mean giving back, verbatim, the words of the teacher or of the textbook. I meant speaking the English language." Other uses More recent meanings of "the three Rs" are: In the subject of CNC code generation by Edgecam Workflow: Rapid, Reliable, and Repeatable In the subject of sustainability: Reduce, Reuse, and Recycle In the subject of American politics and the New Deal: Relief, Recovery, and Reform In animal welfare principles in research (see The Three Rs for animals). The Three Rs principle stands for Reduction, Refinement, and Replacement. It promotes the use of alternative methods whenever possible, reducing the number of animals used, refining the experimental techniques to minimize harm, and replacing animals with non-animal models when feasible (See also 3R disambiguation) See also Standards based education reform Traditional education Trivium (education) Notes Education reform Latin words and phrases
Logic
Logic is the study of correct reasoning. It includes both formal and informal logic. Formal logic is the study of deductively valid inferences or logical truths. It examines how conclusions follow from premises based on the structure of arguments alone, independent of their topic and content. Informal logic is associated with informal fallacies, critical thinking, and argumentation theory. Informal logic examines arguments expressed in natural language whereas formal logic uses formal language. When used as a countable noun, the term "a logic" refers to a specific logical formal system that articulates a proof system. Logic plays a central role in many fields, such as philosophy, mathematics, computer science, and linguistics. Logic studies arguments, which consist of a set of premises that leads to a conclusion. An example is the argument from the premises "it's Sunday" and "if it's Sunday then I don't have to work" leading to the conclusion "I don't have to work". Premises and conclusions express propositions or claims that can be true or false. An important feature of propositions is their internal structure. For example, complex propositions are made up of simpler propositions linked by logical vocabulary like (and) or (if...then). Simple propositions also have parts, like "Sunday" or "work" in the example. The truth of a proposition usually depends on the meanings of all of its parts. However, this is not the case for logically true propositions. They are true only because of their logical structure independent of the specific meanings of the individual parts. Arguments can be either correct or incorrect. An argument is correct if its premises support its conclusion. Deductive arguments have the strongest form of support: if their premises are true then their conclusion must also be true. This is not the case for ampliative arguments, which arrive at genuinely new information not found in the premises. Many arguments in everyday discourse and the sciences are ampliative arguments. They are divided into inductive and abductive arguments. Inductive arguments are statistical generalizations, such as inferring that all ravens are black based on many individual observations of black ravens. Abductive arguments are inferences to the best explanation, for example, when a doctor concludes that a patient has a certain disease which explains the symptoms they suffer. Arguments that fall short of the standards of correct reasoning often embody fallacies. Systems of logic are theoretical frameworks for assessing the correctness of arguments. Logic has been studied since antiquity. Early approaches include Aristotelian logic, Stoic logic, Nyaya, and Mohism. Aristotelian logic focuses on reasoning in the form of syllogisms. It was considered the main system of logic in the Western world until it was replaced by modern formal logic, which has its roots in the work of late 19th-century mathematicians such as Gottlob Frege. Today, the most commonly used system is classical logic. It consists of propositional logic and first-order logic. Propositional logic only considers logical relations between full propositions. First-order logic also takes the internal parts of propositions into account, like predicates and quantifiers. Extended logics accept the basic intuitions behind classical logic and apply it to other fields, such as metaphysics, ethics, and epistemology. Deviant logics, on the other hand, reject certain classical intuitions and provide alternative explanations of the basic laws of logic. 
Definition The word "logic" originates from the Greek word "logos", which has a variety of translations, such as reason, discourse, or language. Logic is traditionally defined as the study of the laws of thought or correct reasoning, and is usually understood in terms of inferences or arguments. Reasoning is the activity of drawing inferences. Arguments are the outward expression of inferences. An argument is a set of premises together with a conclusion. Logic is interested in whether arguments are correct, i.e. whether their premises support the conclusion. These general characterizations apply to logic in the widest sense, i.e., to both formal and informal logic since they are both concerned with assessing the correctness of arguments. Formal logic is the traditionally dominant field, and some logicians restrict logic to formal logic. Formal logic Formal logic is also known as symbolic logic and is widely used in mathematical logic. It uses a formal approach to study reasoning: it replaces concrete expressions with abstract symbols to examine the logical form of arguments independent of their concrete content. In this sense, it is topic-neutral since it is only concerned with the abstract structure of arguments and not with their concrete content. Formal logic is interested in deductively valid arguments, for which the truth of their premises ensures the truth of their conclusion. This means that it is impossible for the premises to be true and the conclusion to be false. For valid arguments, the logical structure of the premises and the conclusion follows a pattern called a rule of inference. For example, modus ponens is a rule of inference according to which all arguments of the form "(1) p, (2) if p then q, (3) therefore q" are valid, independent of what the terms p and q stand for. In this sense, formal logic can be defined as the science of valid inferences. An alternative definition sees logic as the study of logical truths. A proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true in all possible worlds and under all interpretations of its non-logical terms, like the claim "either it is raining, or it is not". These two definitions of formal logic are not identical, but they are closely related. For example, if the inference from p to q is deductively valid then the claim "if p then q" is a logical truth. Formal logic uses formal languages to express and analyze arguments. They normally have a very limited vocabulary and exact syntactic rules. These rules specify how their symbols can be combined to construct sentences, so-called well-formed formulas. This simplicity and exactness of formal logic make it capable of formulating precise rules of inference. They determine whether a given argument is valid. Because of the reliance on formal language, natural language arguments cannot be studied directly. Instead, they need to be translated into formal language before their validity can be assessed. The term "logic" can also be used in a slightly different sense as a countable noun. In this sense, a logic is a logical formal system. Distinct logics differ from each other concerning the rules of inference they accept as valid and the formal languages used to express them. Starting in the late 19th century, many new formal systems have been proposed. There are disagreements about what makes a formal system a logic. 
For example, it has been suggested that only logically complete systems, like first-order logic, qualify as logics. For such reasons, some theorists deny that higher-order logics are logics in the strict sense. Informal logic When understood in a wide sense, logic encompasses both formal and informal logic. Informal logic uses non-formal criteria and standards to analyze and assess the correctness of arguments. Its main focus is on everyday discourse. Its development was prompted by difficulties in applying the insights of formal logic to natural language arguments. In this regard, it considers problems that formal logic on its own is unable to address. Both provide criteria for assessing the correctness of arguments and distinguishing them from fallacies. Many characterizations of informal logic have been suggested but there is no general agreement on its precise definition. The most literal approach sees the terms "formal" and "informal" as applying to the language used to express arguments. On this view, informal logic studies arguments that are in informal or natural language. Formal logic can only examine them indirectly by translating them first into a formal language while informal logic investigates them in their original form. On this view, the argument "Birds fly. Tweety is a bird. Therefore, Tweety flies." belongs to natural language and is examined by informal logic. But the formal translation "(1) ∀x(Bird(x) → Flies(x)); (2) Bird(Tweety); (3) Flies(Tweety)" is studied by formal logic. The study of natural language arguments comes with various difficulties. For example, natural language expressions are often ambiguous, vague, and context-dependent. Another approach defines informal logic in a wide sense as the normative study of the standards, criteria, and procedures of argumentation. In this sense, it includes questions about the role of rationality, critical thinking, and the psychology of argumentation. Another characterization identifies informal logic with the study of non-deductive arguments. In this way, it contrasts with deductive reasoning examined by formal logic. Non-deductive arguments make their conclusion probable but do not ensure that it is true. An example is the inductive argument from the empirical observation that "all ravens I have seen so far are black" to the conclusion "all ravens are black". A further approach is to define informal logic as the study of informal fallacies. Informal fallacies are incorrect arguments in which errors are present in the content and the context of the argument. A false dilemma, for example, involves an error of content by excluding viable options. This is the case in the fallacy "you are either with us or against us; you are not with us; therefore, you are against us". Some theorists state that formal logic studies the general form of arguments while informal logic studies particular instances of arguments. Another approach is to hold that formal logic only considers the role of logical constants for correct inferences while informal logic also takes the meaning of substantive concepts into account. Further approaches focus on the discussion of logical topics with or without formal devices and on the role of epistemology for the assessment of arguments. Basic concepts Premises, conclusions, and truth Premises and conclusions Premises and conclusions are the basic parts of inferences or arguments and therefore play a central role in logic.
In the case of a valid inference or a correct argument, the conclusion follows from the premises, or in other words, the premises support the conclusion. For instance, the premises "Mars is red" and "Mars is a planet" support the conclusion "Mars is a red planet". For most types of logic, it is accepted that premises and conclusions have to be truth-bearers. This means that they have a truth value: they are either true or false. Contemporary philosophy generally sees them either as propositions or as sentences. Propositions are the denotations of sentences and are usually seen as abstract objects. For example, the English sentence "the tree is green" is different from the German sentence "der Baum ist grün" but both express the same proposition. Propositional theories of premises and conclusions are often criticized because they rely on abstract objects. For instance, philosophical naturalists usually reject the existence of abstract objects. Other arguments concern the challenges involved in specifying the identity criteria of propositions. These objections are avoided by seeing premises and conclusions not as propositions but as sentences, i.e. as concrete linguistic objects like the symbols displayed on a page of a book. But this approach comes with new problems of its own: sentences are often context-dependent and ambiguous, meaning an argument's validity would not only depend on its parts but also on its context and on how it is interpreted. Another approach is to understand premises and conclusions in psychological terms as thoughts or judgments. This position is known as psychologism. It was discussed at length around the turn of the 20th century but it is not widely accepted today. Internal structure Premises and conclusions have an internal structure. As propositions or sentences, they can be either simple or complex. A complex proposition has other propositions as its constituents, which are linked to each other through propositional connectives like "and" or "if...then". Simple propositions, on the other hand, do not have propositional parts. But they can also be conceived as having an internal structure: they are made up of subpropositional parts, like singular terms and predicates. For example, the simple proposition "Mars is red" can be formed by applying the predicate "red" to the singular term "Mars". In contrast, the complex proposition "Mars is red and Venus is white" is made up of two simple propositions connected by the propositional connective "and". Whether a proposition is true depends, at least in part, on its constituents. For complex propositions formed using truth-functional propositional connectives, their truth only depends on the truth values of their parts. But this relation is more complicated in the case of simple propositions and their subpropositional parts. These subpropositional parts have meanings of their own, like referring to objects or classes of objects. Whether the simple proposition they form is true depends on their relation to reality, i.e. what the objects they refer to are like. This topic is studied by theories of reference. Logical truth Some complex propositions are true independently of the substantive meanings of their parts. In classical logic, for example, the complex proposition "either Mars is red or Mars is not red" is true independent of whether its parts, like the simple proposition "Mars is red", are true or false. 
In such cases, the truth is called a logical truth: a proposition is logically true if its truth depends only on the logical vocabulary used in it. This means that it is true under all interpretations of its non-logical terms. In some modal logics, this means that the proposition is true in all possible worlds. Some theorists define logic as the study of logical truths. Truth tables Truth tables can be used to show how logical connectives work or how the truth values of complex propositions depend on their parts. They have a column for each input variable. Each row corresponds to one possible combination of the truth values these variables can take; for truth tables presented in the English literature, the symbols "T" and "F" or "1" and "0" are commonly used as abbreviations for the truth values "true" and "false". The first columns present all the possible truth-value combinations for the input variables. Entries in the other columns present the truth values of the corresponding expressions as determined by the input values. For example, the expression p ∧ q uses the logical connective ∧ (and). It could be used to express a sentence like "yesterday was Sunday and the weather was good". It is only true if both of its input variables, p ("yesterday was Sunday") and q ("the weather was good"), are true. In all other cases, the expression as a whole is false. Other important logical connectives are ¬ (not), ∨ (or), → (if...then), and ↑ (Sheffer stroke). Given the conditional proposition p → q, one can form truth tables of its converse q → p, its inverse ¬p → ¬q, and its contrapositive ¬q → ¬p. Truth tables can also be defined for more complex expressions that use several propositional connectives. Arguments and inferences Logic is commonly defined in terms of arguments or inferences as the study of their correctness. An argument is a set of premises together with a conclusion. An inference is the process of reasoning from these premises to the conclusion. But these terms are often used interchangeably in logic. Arguments are correct or incorrect depending on whether their premises support their conclusion. Premises and conclusions, on the other hand, are true or false depending on whether they are in accord with reality. In formal logic, a sound argument is an argument that is both correct and has only true premises. Sometimes a distinction is made between simple and complex arguments. A complex argument is made up of a chain of simple arguments. This means that the conclusion of one argument acts as a premise of later arguments. For a complex argument to be successful, each link of the chain has to be successful. Arguments and inferences are either correct or incorrect. If they are correct then their premises support their conclusion. In the incorrect case, this support is missing. It can take different forms corresponding to the different types of reasoning. The strongest form of support corresponds to deductive reasoning. But even arguments that are not deductively valid may still be good arguments because their premises offer non-deductive support to their conclusions. For such cases, the term ampliative or inductive reasoning is used. Deductive arguments are associated with formal logic in contrast to the relation between ampliative arguments and informal logic.
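The truth-table ideas discussed above can be reproduced mechanically. The following sketch is written for this text and is not part of the source; the helper name conditional is invented here. It prints a small truth table for p ∧ q and for the conditional p → q together with its contrapositive ¬q → ¬p.

    from itertools import product

    def conditional(p: bool, q: bool) -> bool:
        """Material conditional p -> q: false only when p is true and q is false."""
        return (not p) or q

    headers = ("p", "q", "p AND q", "p -> q", "NOT q -> NOT p")
    print(" | ".join(f"{h:<14}" for h in headers))
    for p, q in product((True, False), repeat=2):
        row = (p, q, p and q, conditional(p, q), conditional(not q, not p))
        print(" | ".join(f"{str(v):<14}" for v in row))

The last two columns agree in every row, which is the sense in which a conditional and its contrapositive are logically equivalent.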
For deductive validity, it does not matter whether the premises or the conclusion are actually true. So the argument "(1) all frogs are mammals; (2) no cats are mammals; (3) therefore no cats are frogs" is also valid because the conclusion follows necessarily from the premises. According to an influential view by Alfred Tarski, deductive arguments have three essential features: (1) they are formal, i.e. they depend only on the form of the premises and the conclusion; (2) they are a priori, i.e. no sense experience is needed to determine whether they obtain; (3) they are modal, i.e. that they hold by logical necessity for the given propositions, independent of any other circumstances. Because of the first feature, the focus on formality, deductive inference is usually identified with rules of inference. Rules of inference specify the form of the premises and the conclusion: how they have to be structured for the inference to be valid. Arguments that do not follow any rule of inference are deductively invalid. The modus ponens is a prominent rule of inference. It has the form "p; if p, then q; therefore q". Knowing that it has just rained and that after rain the streets are wet, one can use modus ponens to deduce that the streets are wet. The third feature can be expressed by stating that deductively valid inferences are truth-preserving: it is impossible for the premises to be true and the conclusion to be false. Because of this feature, it is often asserted that deductive inferences are uninformative since the conclusion cannot arrive at new information not already present in the premises. But this point is not always accepted since it would mean, for example, that most of mathematics is uninformative. A different characterization distinguishes between surface and depth information. The surface information of a sentence is the information it presents explicitly. Depth information is the totality of the information contained in the sentence, both explicitly and implicitly. According to this view, deductive inferences are uninformative on the depth level. But they can be highly informative on the surface level by making implicit information explicit. This happens, for example, in mathematical proofs. Ampliative Ampliative arguments are arguments whose conclusions contain additional information not found in their premises. In this regard, they are more interesting since they contain information on the depth level and the thinker may learn something genuinely new. But this feature comes with a certain cost: the premises support the conclusion in the sense that they make its truth more likely but they do not ensure its truth. This means that the conclusion of an ampliative argument may be false even though all its premises are true. This characteristic is closely related to non-monotonicity and defeasibility: it may be necessary to retract an earlier conclusion upon receiving new information or in light of new inferences drawn. Ampliative reasoning plays a central role in many arguments found in everyday discourse and the sciences. Ampliative arguments are not automatically incorrect. Instead, they just follow different standards of correctness. The support they provide for their conclusion usually comes in degrees. This means that strong ampliative arguments make their conclusion very likely while weak ones are less certain. As a consequence, the line between correct and incorrect arguments is blurry in some cases, such as when the premises offer weak but non-negligible support. 
This contrasts with deductive arguments, which are either valid or invalid with nothing in-between. The terminology used to categorize ampliative arguments is inconsistent. Some authors, like James Hawthorne, use the term "induction" to cover all forms of non-deductive arguments. But in a more narrow sense, induction is only one type of ampliative argument alongside abductive arguments. Some philosophers, like Leo Groarke, also allow conductive arguments as another type. In this narrow sense, induction is often defined as a form of statistical generalization. In this case, the premises of an inductive argument are many individual observations that all show a certain pattern. The conclusion then is a general law that this pattern always obtains. In this sense, one may infer that "all elephants are gray" based on one's past observations of the color of elephants. A closely related form of inductive inference has as its conclusion not a general law but one more specific instance, as when it is inferred that an elephant one has not seen yet is also gray. Some theorists, like Igor Douven, stipulate that inductive inferences rest only on statistical considerations. This way, they can be distinguished from abductive inference. Abductive inference may or may not take statistical observations into consideration. In either case, the premises offer support for the conclusion because the conclusion is the best explanation of why the premises are true. In this sense, abduction is also called the inference to the best explanation. For example, given the premise that there is a plate with breadcrumbs in the kitchen in the early morning, one may infer the conclusion that one's house-mate had a midnight snack and was too tired to clean the table. This conclusion is justified because it is the best explanation of the current state of the kitchen. For abduction, it is not sufficient that the conclusion explains the premises. For example, the conclusion that a burglar broke into the house last night, got hungry on the job, and had a midnight snack, would also explain the state of the kitchen. But this conclusion is not justified because it is not the best or most likely explanation. Fallacies Not all arguments live up to the standards of correct reasoning. When they do not, they are usually referred to as fallacies. Their central aspect is not that their conclusion is false but that there is some flaw with the reasoning leading to this conclusion. So the argument "it is sunny today; therefore spiders have eight legs" is fallacious even though the conclusion is true. Some theorists, like John Stuart Mill, give a more restrictive definition of fallacies by additionally requiring that they appear to be correct. This way, genuine fallacies can be distinguished from mere mistakes of reasoning due to carelessness. This explains why people tend to commit fallacies: because they have an alluring element that seduces people into committing and accepting them. However, this reference to appearances is controversial because it belongs to the field of psychology, not logic, and because appearances may be different for different people. Fallacies are usually divided into formal and informal fallacies. For formal fallacies, the source of the error is found in the form of the argument. For example, denying the antecedent is one type of formal fallacy, as in "if Othello is a bachelor, then he is male; Othello is not a bachelor; therefore Othello is not male". 
But most fallacies fall into the category of informal fallacies, of which a great variety is discussed in the academic literature. The source of their error is usually found in the content or the context of the argument. Informal fallacies are sometimes categorized as fallacies of ambiguity, fallacies of presumption, or fallacies of relevance. For fallacies of ambiguity, the ambiguity and vagueness of natural language are responsible for their flaw, as in "feathers are light; what is light cannot be dark; therefore feathers cannot be dark". Fallacies of presumption have a wrong or unjustified premise but may be valid otherwise. In the case of fallacies of relevance, the premises do not support the conclusion because they are not relevant to it. Definitory and strategic rules The main focus of most logicians is to study the criteria according to which an argument is correct or incorrect. A fallacy is committed if these criteria are violated. In the case of formal logic, they are known as rules of inference. They are definitory rules, which determine whether an inference is correct or which inferences are allowed. Definitory rules contrast with strategic rules. Strategic rules specify which inferential moves are necessary to reach a given conclusion based on a set of premises. This distinction does not just apply to logic but also to games. In chess, for example, the definitory rules dictate that bishops may only move diagonally. The strategic rules, on the other hand, describe how the allowed moves may be used to win a game, for instance, by controlling the center and by defending one's king. It has been argued that logicians should give more emphasis to strategic rules since they are highly relevant for effective reasoning. Formal systems A formal system of logic consists of a formal language together with a set of axioms and a proof system used to draw inferences from these axioms. In logic, axioms are statements that are accepted without proof. They are used to justify other statements. Some theorists also include a semantics that specifies how the expressions of the formal language relate to real objects. Starting in the late 19th century, many new formal systems have been proposed. A formal language consists of an alphabet and syntactic rules. The alphabet is the set of basic symbols used in expressions. The syntactic rules determine how these symbols may be arranged to result in well-formed formulas. For instance, the syntactic rules of propositional logic determine that (P ∧ Q) is a well-formed formula but ∧Q is not, since the logical conjunction ∧ requires terms on both sides. A proof system is a collection of rules to construct formal proofs. It is a tool to arrive at conclusions from a set of axioms. Rules in a proof system are defined in terms of the syntactic form of formulas independent of their specific content. For instance, the classical rule of conjunction introduction states that P ∧ Q follows from the premises P and Q. Such rules can be applied sequentially, giving a mechanical procedure for generating conclusions from premises. There are different types of proof systems including natural deduction and sequent calculi. A semantics is a system for mapping expressions of a formal language to their denotations. In many systems of logic, denotations are truth values. For instance, the semantics for classical propositional logic assigns the formula P ∧ Q the denotation "true" whenever P and Q are true.
From the semantic point of view, a premise entails a conclusion if the conclusion is true whenever the premise is true. A system of logic is sound when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is complete when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly. Systems of logic Systems of logic are theoretical frameworks for assessing the correctness of reasoning and arguments. For over two thousand years, Aristotelian logic was treated as the canon of logic in the Western world, but modern developments in this field have led to a vast proliferation of logical systems. One prominent categorization divides modern formal logical systems into classical logic, extended logics, and deviant logics. Aristotelian Aristotelian logic encompasses a great variety of topics. They include metaphysical theses about ontological categories and problems of scientific explanation. But in a more narrow sense, it is identical to term logic or syllogistics. A syllogism is a form of argument involving three propositions: two premises and a conclusion. Each proposition has three essential parts: a subject, a predicate, and a copula connecting the subject to the predicate. For example, the proposition "Socrates is wise" is made up of the subject "Socrates", the predicate "wise", and the copula "is". The subject and the predicate are the terms of the proposition. Aristotelian logic does not contain complex propositions made up of simple propositions. It differs in this aspect from propositional logic, in which any two propositions can be linked using a logical connective like "and" to form a new complex proposition. In Aristotelian logic, the subject can be universal, particular, indefinite, or singular. For example, the term "all humans" is a universal subject in the proposition "all humans are mortal". A similar proposition could be formed by replacing it with the particular term "some humans", the indefinite term "a human", or the singular term "Socrates". Aristotelian logic only includes predicates for simple properties of entities. But it lacks predicates corresponding to relations between entities. The predicate can be linked to the subject in two ways: either by affirming it or by denying it. For example, the proposition "Socrates is not a cat" involves the denial of the predicate "cat" to the subject "Socrates". Using combinations of subjects and predicates, a great variety of propositions and syllogisms can be formed. Syllogisms are characterized by the fact that the premises are linked to each other and to the conclusion by sharing one predicate in each case. Thus, these three propositions contain three predicates, referred to as major term, minor term, and middle term. The central aspect of Aristotelian logic involves classifying all possible syllogisms into valid and invalid arguments according to how the propositions are formed. For example, the syllogism "all men are mortal; Socrates is a man; therefore Socrates is mortal" is valid. The syllogism "all cats are mortal; Socrates is mortal; therefore Socrates is a cat", on the other hand, is invalid. 
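A brute-force way to check such syllogistic forms is to search a small finite domain for a counterexample: the form counts as invalid as soon as some assignment of extensions to the two terms makes every premise true and the conclusion false. The sketch below is illustrative only; the domain, the helper name entailed, and the use of Python sets for predicate extensions are choices made here, not part of the source. For these monadic examples a three-element domain suffices to expose the invalid form.

    from itertools import product

    DOMAIN = ("socrates", "plato", "fido")   # small hypothetical domain

    def entailed(premises, conclusion) -> bool:
        """Finite-domain counterexample search over all extensions of two
        monadic predicates A and B (represented as Python sets)."""
        for a_bits, b_bits in product(product((False, True), repeat=len(DOMAIN)), repeat=2):
            A = {x for x, keep in zip(DOMAIN, a_bits) if keep}
            B = {x for x, keep in zip(DOMAIN, b_bits) if keep}
            if all(p(A, B) for p in premises) and not conclusion(A, B):
                return False                  # found a counterexample model
        return True

    all_A_are_B = lambda A, B: A <= B         # "All A are B"

    # "All men are mortal; Socrates is a man; therefore Socrates is mortal."
    print(entailed([all_A_are_B, lambda A, B: "socrates" in A],
                   lambda A, B: "socrates" in B))   # True: no counterexample
    # "All cats are mortal; Socrates is mortal; therefore Socrates is a cat."
    print(entailed([all_A_are_B, lambda A, B: "socrates" in B],
                   lambda A, B: "socrates" in A))   # False: invalid form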
Classical Classical logic is distinct from traditional or Aristotelian logic. It encompasses propositional logic and first-order logic. It is "classical" in the sense that it is based on basic logical intuitions shared by most logicians. These intuitions include the law of excluded middle, the double negation elimination, the principle of explosion, and the bivalence of truth. It was originally developed to analyze mathematical arguments and was only later applied to other fields as well. Because of this focus on mathematics, it does not include logical vocabulary relevant to many other topics of philosophical importance. Examples of concepts it overlooks are the contrast between necessity and possibility and the problem of ethical obligation and permission. Similarly, it does not address the relations between past, present, and future. Such issues are addressed by extended logics. They build on the basic intuitions of classical logic and expand it by introducing new logical vocabulary. This way, the exact logical approach is applied to fields like ethics or epistemology that lie beyond the scope of mathematics. Propositional logic Propositional logic comprises formal systems in which formulae are built from atomic propositions using logical connectives. For instance, propositional logic represents the conjunction of two atomic propositions P and Q as the complex formula P ∧ Q. Unlike predicate logic where terms and predicates are the smallest units, propositional logic takes full propositions with truth values as its most basic component. Thus, propositional logics can only represent logical relationships that arise from the way complex propositions are built from simpler ones. But it cannot represent inferences that result from the inner structure of a proposition. First-order logic First-order logic includes the same propositional connectives as propositional logic but differs from it because it articulates the internal structure of propositions. This happens through devices such as singular terms, which refer to particular objects, predicates, which refer to properties and relations, and quantifiers, which treat notions like "some" and "all". For example, to express the proposition "this raven is black", one may use the predicate B for the property "black" and the singular term r referring to the raven to form the expression B(r). To express that some objects are black, the existential quantifier is combined with the variable x to form the proposition ∃x B(x). First-order logic contains various rules of inference that determine how expressions articulated this way can form valid arguments, for example, that one may infer ∃x B(x) from B(r). Extended Extended logics are logical systems that accept the basic principles of classical logic. They introduce additional symbols and principles to apply it to fields like metaphysics, ethics, and epistemology. Modal logic Modal logic is an extension of classical logic. In its original form, sometimes called "alethic modal logic", it introduces two new symbols: ◇ expresses that something is possible while □ expresses that something is necessary. For example, if the formula B(s) stands for the sentence "Socrates is a banker" then the formula ◇B(s) articulates the sentence "It is possible that Socrates is a banker". To include these symbols in the logical formalism, modal logic introduces new rules of inference that govern what role they play in inferences. One rule of inference states that, if something is necessary, then it is also possible. This means that ◇A follows from □A. 
Another principle states that if a proposition is necessary then its negation is impossible and vice versa. This means that □A is equivalent to ¬◇¬A. Other forms of modal logic introduce similar symbols but associate different meanings with them to apply modal logic to other fields. For example, deontic logic concerns the field of ethics and introduces symbols to express the ideas of obligation and permission, i.e. to describe whether an agent has to perform a certain action or is allowed to perform it. The modal operators in temporal modal logic articulate temporal relations. They can be used to express, for example, that something happened at one time or that something is happening all the time. In epistemology, epistemic modal logic is used to represent the ideas of knowing something in contrast to merely believing it to be the case. Higher order logic Higher-order logics extend classical logic not by using modal operators but by introducing new forms of quantification. Quantifiers correspond to terms like "all" or "some". In classical first-order logic, quantifiers are only applied to individuals. The formula ∃x(Apple(x) ∧ Sweet(x)) (some apples are sweet) is an example of the existential quantifier applied to the individual variable x. In higher-order logics, quantification is also allowed over predicates. This increases its expressive power. For example, to express the idea that Mary and John share some qualities, one could use the formula ∃Q(Q(mary) ∧ Q(john)). In this case, the existential quantifier is applied to the predicate variable Q. The added expressive power is especially useful for mathematics since it allows for more succinct formulations of mathematical theories. But it has drawbacks in regard to its meta-logical properties and ontological implications, which is why first-order logic is still more commonly used. Deviant Deviant logics are logical systems that reject some of the basic intuitions of classical logic. Because of this, they are usually seen not as its supplements but as its rivals. Deviant logical systems differ from each other either because they reject different classical intuitions or because they propose different alternatives to the same issue. Intuitionistic logic is a restricted version of classical logic. It uses the same symbols but excludes some rules of inference. For example, according to the law of double negation elimination, if a sentence is not not true, then it is true. This means that A follows from ¬¬A. This is a valid rule of inference in classical logic but it is invalid in intuitionistic logic. Another classical principle not part of intuitionistic logic is the law of excluded middle. It states that for every sentence, either it or its negation is true. This means that every proposition of the form A ∨ ¬A is true. These deviations from classical logic are based on the idea that truth is established by verification using a proof. Intuitionistic logic is especially prominent in the field of constructive mathematics, which emphasizes the need to find or construct a specific example to prove its existence. Multi-valued logics depart from classicality by rejecting the principle of bivalence, which requires all propositions to be either true or false. For instance, Jan Łukasiewicz and Stephen Cole Kleene both proposed ternary logics which have a third truth value representing that a statement's truth value is indeterminate. These logics have been applied in the field of linguistics. 
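A small sketch of how such a ternary logic behaves in practice: the code below implements Kleene's strong three-valued connectives in Python, with the third value written "U" for an indeterminate statement. The encoding of truth values is an illustrative choice, not taken from the text above.

```python
# Sketch of Kleene's strong three-valued connectives ("U" = undetermined).
T, U, F = "T", "U", "F"
rank = {F: 0, U: 1, T: 2}

def k_not(a):
    return {T: F, U: U, F: T}[a]

def k_and(a, b):
    # Conjunction takes the "worse" of the two values (F < U < T).
    return a if rank[a] < rank[b] else b

def k_or(a, b):
    # Disjunction takes the "better" of the two values.
    return a if rank[a] > rank[b] else b

for a in (T, U, F):
    for b in (T, U, F):
        print(a, b, "| and:", k_and(a, b), " or:", k_or(a, b))

# Note: the law of excluded middle fails here, since k_or(U, k_not(U)) == U.
```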
Fuzzy logics are multivalued logics that have an infinite number of "degrees of truth", represented by a real number between 0 and 1. Paraconsistent logics are logical systems that can deal with contradictions. They are formulated to avoid the principle of explosion: for them, it is not the case that anything follows from a contradiction. They are often motivated by dialetheism, the view that contradictions are real or that reality itself is contradictory. Graham Priest is an influential contemporary proponent of this position and similar views have been ascribed to Georg Wilhelm Friedrich Hegel. Informal Informal logic is usually carried out in a less systematic way. It often focuses on more specific issues, like investigating a particular type of fallacy or studying a certain aspect of argumentation. Nonetheless, some frameworks of informal logic have also been presented that try to provide a systematic characterization of the correctness of arguments. The pragmatic or dialogical approach to informal logic sees arguments as speech acts and not merely as a set of premises together with a conclusion. As speech acts, they occur in a certain context, like a dialogue, which affects the standards of right and wrong arguments. A prominent version by Douglas N. Walton understands a dialogue as a game between two players. The initial position of each player is characterized by the propositions to which they are committed and the conclusion they intend to prove. Dialogues are games of persuasion: each player has the goal of convincing the opponent of their own conclusion. This is achieved by making arguments: arguments are the moves of the game. They affect to which propositions the players are committed. A winning move is a successful argument that takes the opponent's commitments as premises and shows how one's own conclusion follows from them. This is usually not possible straight away. For this reason, it is normally necessary to formulate a sequence of arguments as intermediary steps, each of which brings the opponent a little closer to one's intended conclusion. Besides these positive arguments leading one closer to victory, there are also negative arguments preventing the opponent's victory by denying their conclusion. Whether an argument is correct depends on whether it promotes the progress of the dialogue. Fallacies, on the other hand, are violations of the standards of proper argumentative rules. These standards also depend on the type of dialogue. For example, the standards governing the scientific discourse differ from the standards in business negotiations. The epistemic approach to informal logic, on the other hand, focuses on the epistemic role of arguments. It is based on the idea that arguments aim to increase our knowledge. They achieve this by linking justified beliefs to beliefs that are not yet justified. Correct arguments succeed at expanding knowledge while fallacies are epistemic failures: they do not justify the belief in their conclusion. For example, the fallacy of begging the question is a fallacy because it fails to provide independent justification for its conclusion, even though it is deductively valid. In this sense, logical normativity consists in epistemic success or rationality. The Bayesian approach is one example of an epistemic approach. Central to Bayesianism is not just whether the agent believes something but the degree to which they believe it, the so-called credence. Degrees of belief are seen as subjective probabilities in the believed proposition, i.e. 
how certain the agent is that the proposition is true. On this view, reasoning can be interpreted as a process of changing one's credences, often in reaction to new incoming information. Correct reasoning and the arguments it is based on follow the laws of probability, for example, the principle of conditionalization. Bad or irrational reasoning, on the other hand, violates these laws. Areas of research Logic is studied in various fields. In many cases, this is done by applying its formal method to specific topics outside its scope, like to ethics or computer science. In other cases, logic itself is made the subject of research in another discipline. This can happen in diverse ways. For instance, it can involve investigating the philosophical assumptions linked to the basic concepts used by logicians. Other ways include interpreting and analyzing logic through mathematical structures as well as studying and comparing abstract properties of formal logical systems. Philosophy of logic and philosophical logic Philosophy of logic is the philosophical discipline studying the scope and nature of logic. It examines many presuppositions implicit in logic, like how to define its basic concepts or the metaphysical assumptions associated with them. It is also concerned with how to classify logical systems and considers the ontological commitments they incur. Philosophical logic is one of the areas within the philosophy of logic. It studies the application of logical methods to philosophical problems in fields like metaphysics, ethics, and epistemology. This application usually happens in the form of extended or deviant logical systems. Metalogic Metalogic is the field of inquiry studying the properties of formal logical systems. For example, when a new formal system is developed, metalogicians may study it to determine which formulas can be proven in it. They may also study whether an algorithm could be developed to find a proof for each formula and whether every provable formula in it is a tautology. Finally, they may compare it to other logical systems to understand its distinctive features. A key issue in metalogic concerns the relation between syntax and semantics. The syntactic rules of a formal system determine how to deduce conclusions from premises, i.e. how to formulate proofs. The semantics of a formal system governs which sentences are true and which ones are false. This determines the validity of arguments since, for valid arguments, it is impossible for the premises to be true and the conclusion to be false. The relation between syntax and semantics concerns issues like whether every valid argument is provable and whether every provable argument is valid. Metalogicians also study whether logical systems are complete, sound, and consistent. They are interested in whether the systems are decidable and what expressive power they have. Metalogicians usually rely heavily on abstract mathematical reasoning when examining and formulating metalogical proofs. This way, they aim to arrive at precise and general conclusions on these topics. Mathematical logic The term "mathematical logic" is sometimes used as a synonym of "formal logic". But in a more restricted sense, it refers to the study of logic within mathematics. Major subareas include model theory, proof theory, set theory, and computability theory. Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic. 
However, it can also include attempts to use logic to analyze mathematical reasoning or to establish logic-based foundations of mathematics. The latter was a major concern in early 20th-century mathematical logic, which pursued the program of logicism pioneered by philosopher-logicians such as Gottlob Frege, Alfred North Whitehead, and Bertrand Russell. Mathematical theories were supposed to be logical tautologies, and their program was to show this by means of a reduction of mathematics to logic. Many attempts to realize this program failed, from the crippling of Frege's project in his Grundgesetze by Russell's paradox, to the defeat of Hilbert's program by Gödel's incompleteness theorems. Set theory originated in the study of the infinite by Georg Cantor, and it has been the source of many of the most challenging and important issues in mathematical logic. They include Cantor's theorem, the status of the Axiom of Choice, the question of the independence of the continuum hypothesis, and the modern debate on large cardinal axioms. Computability theory is the branch of mathematical logic that studies effective procedures to solve calculation problems. One of its main goals is to understand whether it is possible to solve a given problem using an algorithm. For instance, given a certain claim about the positive integers, it examines whether an algorithm can be found to determine if this claim is true. Computability theory uses various theoretical tools and models, such as Turing machines, to explore this type of issue. Computational logic Computational logic is the branch of logic and computer science that studies how to implement mathematical reasoning and logical formalisms using computers. This includes, for example, automatic theorem provers, which employ rules of inference to construct a proof step by step from a set of premises to the intended conclusion without human intervention. Logic programming languages are designed specifically to express facts using logical formulas and to draw inferences from these facts. For example, Prolog is a logic programming language based on predicate logic. Computer scientists also apply concepts from logic to problems in computing. The works of Claude Shannon were influential in this regard. He showed how Boolean logic can be used to understand and implement computer circuits. This can be achieved using electronic logic gates, i.e. electronic circuits with one or more inputs and usually one output. The truth values of propositions are represented by voltage levels. In this way, logic functions can be simulated by applying the corresponding voltages to the inputs of the circuit and determining the value of the function by measuring the voltage of the output. Formal semantics of natural language Formal semantics is a subfield of logic, linguistics, and the philosophy of language. The discipline of semantics studies the meaning of language. Formal semantics uses formal tools from the fields of symbolic logic and mathematics to give precise theories of the meaning of natural language expressions. It understands meaning usually in relation to truth conditions, i.e. it examines in which situations a sentence would be true or false. One of its central methodological assumptions is the principle of compositionality. It states that the meaning of a complex expression is determined by the meanings of its parts and how they are combined. 
For example, the meaning of the verb phrase "walk and sing" depends on the meanings of the individual expressions "walk" and "sing". Many theories in formal semantics rely on model theory. This means that they employ set theory to construct a model and then interpret the meanings of expressions in relation to the elements in this model. For example, the term "walk" may be interpreted as the set of all individuals in the model that share the property of walking. Early influential theorists in this field were Richard Montague and Barbara Partee, who focused their analysis on the English language. Epistemology of logic The epistemology of logic studies how one knows that an argument is valid or that a proposition is logically true. This includes questions like how to justify that modus ponens is a valid rule of inference or that contradictions are false. The traditionally dominant view is that this form of logical understanding belongs to knowledge a priori. In this regard, it is often argued that the mind has a special faculty to examine relations between pure ideas and that this faculty is also responsible for apprehending logical truths. A similar approach understands the rules of logic in terms of linguistic conventions. On this view, the laws of logic are trivial since they are true by definition: they just express the meanings of the logical vocabulary. Some theorists, like Hilary Putnam and Penelope Maddy, object to the view that logic is knowable a priori. They hold instead that logical truths depend on the empirical world. This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world. According to this view, they may be explored by studying general patterns of the fundamental sciences. For example, it has been argued that certain insights of quantum mechanics refute the principle of distributivity in classical logic, which states that the formula A ∧ (B ∨ C) is equivalent to (A ∧ B) ∨ (A ∧ C). This claim can be used as an empirical argument for the thesis that quantum logic is the correct logical system and should replace classical logic. History Logic was developed independently in several cultures during antiquity. One major early contributor was Aristotle, who developed term logic in his Organon and Prior Analytics. He was responsible for the introduction of the hypothetical syllogism and temporal modal logic. Further innovations include inductive logic as well as the discussion of new logical concepts such as terms, predicables, syllogisms, and propositions. Aristotelian logic was highly regarded in classical and medieval times, both in Europe and the Middle East. It remained in wide use in the West until the early 19th century. It has now been superseded by later work, though many of its key insights are still present in modern systems of logic. Ibn Sina (Avicenna) was the founder of Avicennian logic, which replaced Aristotelian logic as the dominant system of logic in the Islamic world. It influenced Western medieval writers such as Albertus Magnus and William of Ockham. Ibn Sina wrote on the hypothetical syllogism and on the propositional calculus. He developed an original "temporally modalized" syllogistic theory, involving temporal logic and modal logic. He also made use of inductive logic, such as his methods of agreement, difference, and concomitant variation, which are critical to the scientific method. Fakhr al-Din al-Razi was another influential Muslim logician. 
He criticized Aristotelian syllogistics and formulated an early system of inductive logic, foreshadowing the system of inductive logic developed by John Stuart Mill. During the Middle Ages, many translations and interpretations of Aristotelian logic were made. The works of Boethius were particularly influential. Besides translating Aristotle's work into Latin, he also produced textbooks on logic. Later, the works of Islamic philosophers such as Ibn Sina and Ibn Rushd (Averroes) were drawn on. This expanded the range of ancient works available to medieval Christian scholars since more Greek works were available to Muslim scholars than had been preserved in Latin commentaries. In 1323, William of Ockham's influential Summa Logicae was published. It is a comprehensive treatise on logic that discusses many basic concepts of logic and provides a systematic exposition of types of propositions and their truth conditions. In Chinese philosophy, the School of Names and Mohism were particularly influential. The School of Names focused on the use of language and on paradoxes. For example, Gongsun Long proposed the white horse paradox, which defends the thesis that a white horse is not a horse. The school of Mohism also acknowledged the importance of language for logic and tried to relate the ideas in these fields to the realm of ethics. In India, the study of logic was primarily pursued by the schools of Nyaya, Buddhism, and Jainism. It was not treated as a separate academic discipline and discussions of its topics usually happened in the context of epistemology and theories of dialogue or argumentation. In Nyaya, inference is understood as a source of knowledge (pramāṇa). It follows the perception of an object and tries to arrive at conclusions, for example, about the cause of this object. A similar emphasis on the relation to epistemology is also found in Buddhist and Jainist schools of logic, where inference is used to expand the knowledge gained through other sources. Some of the later theories of Nyaya, belonging to the Navya-Nyāya school, resemble modern forms of logic, such as Gottlob Frege's distinction between sense and reference and his definition of number. The syllogistic logic developed by Aristotle predominated in the West until the mid-19th century, when interest in the foundations of mathematics stimulated the development of modern symbolic logic. Many see Gottlob Frege's Begriffsschrift as the birthplace of modern logic. Gottfried Wilhelm Leibniz's idea of a universal formal language is often considered a forerunner. Other pioneers were George Boole, who invented Boolean algebra as a mathematical system of logic, and Charles Peirce, who developed the logic of relatives. Alfred North Whitehead and Bertrand Russell, in turn, condensed many of these insights in their work Principia Mathematica. Modern logic introduced novel concepts, such as functions, quantifiers, and relational predicates. A hallmark of modern symbolic logic is its use of formal language to precisely codify its insights. In this regard, it departs from earlier logicians, who relied mainly on natural language. Of particular influence was the development of first-order logic, which is usually treated as the standard system of modern logic. Its analytical generality allowed the formalization of mathematics and drove the investigation of set theory. It also made Alfred Tarski's approach to model theory possible and provided the foundation of modern mathematical logic. 
See also References Notes Citations Bibliography External links Formal sciences
0.77074
0.999569
0.770408
Henderson–Hasselbalch equation
In chemistry and biochemistry, the Henderson–Hasselbalch equation relates the pH of a chemical solution of a weak acid to the numerical value of the acid dissociation constant, Ka, of the acid and the ratio of the concentrations of the acid and its conjugate base in an equilibrium: pH = pKa + log10([A−]/[HA]). For example, the acid may be acetic acid: CH3COOH ⇌ CH3COO− + H+. The Henderson–Hasselbalch equation can be used to estimate the pH of a buffer solution by approximating the actual concentration ratio as the ratio of the analytical concentrations of the acid and of a salt, MA. The equation can also be applied to bases by specifying the protonated form of the base as the acid. For example, with an amine, RNH2, the acid is the protonated form RNH3+. Derivation, assumptions and limitations A simple buffer solution consists of a solution of an acid and a salt of the conjugate base of the acid. For example, the acid may be acetic acid and the salt may be sodium acetate. The Henderson–Hasselbalch equation relates the pH of a solution containing a mixture of the two components to the acid dissociation constant, Ka, of the acid, and the concentrations of the species in solution. To derive the equation a number of simplifying assumptions have to be made. Assumption 1: The acid, HA, is monobasic and dissociates according to the equation HA ⇌ H+ + A−. CA is the analytical concentration of the acid and CH is the concentration of the hydrogen ion that has been added to the solution. The self-dissociation of water is ignored. A quantity in square brackets, [X], represents the concentration of the chemical substance X. It is understood that the symbol H+ stands for the hydrated hydronium ion. Ka is an acid dissociation constant. The Henderson–Hasselbalch equation can be applied to a polybasic acid only if its consecutive pK values differ by at least 3. Phosphoric acid is such an acid. Assumption 2: The self-ionization of water can be ignored. This assumption is not, strictly speaking, valid with pH values close to 7, half the value of pKw, the constant for self-ionization of water. In this case the mass-balance equation for hydrogen should be extended to take account of the self-ionization of water. However, the self-ionization term can be omitted to a good approximation. Assumption 3: The salt MA is completely dissociated in solution. For example, with sodium acetate the concentration of the sodium ion, [Na+], can be ignored. This is a good approximation for 1:1 electrolytes, but not for salts of ions that have a higher charge, such as magnesium sulphate, MgSO4, that form ion pairs. Assumption 4: The quotient of activity coefficients is a constant under the experimental conditions covered by the calculations. The thermodynamic equilibrium constant is a product of a quotient of concentrations, [H+][A−]/[HA], and a quotient of activity coefficients, γH+γA−/γHA. In these expressions, the quantities in square brackets signify the concentration of the undissociated acid, HA, of the hydrogen ion H+, and of the anion A−; the quantities γ are the corresponding activity coefficients. If the quotient of activity coefficients can be assumed to be a constant which is independent of concentrations and pH, the dissociation constant, Ka, can be expressed as a quotient of concentrations: Ka = [H+][A−]/[HA]. Rearrangement of this expression and taking logarithms provides the Henderson–Hasselbalch equation: pH = pKa + log10([A−]/[HA]). Application to bases The equilibrium constant for the protonation of a base, B (B + H+ ⇌ BH+), is an association constant, Kb, which is simply related to the dissociation constant of the conjugate acid, BH+. The value of pKw is ca. 14 at 25 °C. 
This approximation can be used when the correct value is not known. Thus, the Henderson–Hasselbalch equation can be used, without modification, for bases. Biological applications With homeostasis the pH of a biological solution is maintained at a constant value by adjusting the position of the equilibria HCO3− + H+ ⇌ H2CO3 ⇌ CO2 + H2O, where HCO3− is the bicarbonate ion and H2CO3 is carbonic acid. However, the solubility of carbonic acid in water may be exceeded. When this happens carbon dioxide gas is liberated and the following equation may be used instead: pH = pK′ + log10([HCO3−]/pCO2), where pCO2 represents the carbon dioxide liberated as gas. In this equation, which is widely used in biochemistry, pK′ is a mixed equilibrium constant relating to both chemical and solubility equilibria. It can be expressed as K′ = [H+][HCO3−]/pCO2, where [HCO3−] is the molar concentration of bicarbonate in the blood plasma and pCO2 is the partial pressure of carbon dioxide in the supernatant gas. History In 1908, Lawrence Joseph Henderson derived an equation to calculate the hydrogen ion concentration of a bicarbonate buffer solution, which rearranged looks like this: [H+] = Ka [CO2]/[HCO3−]. In 1909 Søren Peter Lauritz Sørensen introduced the pH terminology, which allowed Karl Albert Hasselbalch to re-express Henderson's equation in logarithmic terms, resulting in the Henderson–Hasselbalch equation. See also Davenport diagram Gastric tonometry Further reading References Acid–base chemistry Eponymous equations of physics Equilibrium chemistry Mathematics in medicine
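As a numerical illustration of the equation derived above, the following sketch estimates the pH of a simple buffer from analytical concentrations. The acetate pKa of about 4.76 and the chosen concentrations are example values for the illustration, not figures taken from the text.

```python
import math

def buffer_ph(pka, conjugate_base_molarity, acid_molarity):
    """Henderson–Hasselbalch estimate: pH = pKa + log10([A-]/[HA]).
    Uses analytical concentrations, i.e. the approximation described
    in the derivation above."""
    return pka + math.log10(conjugate_base_molarity / acid_molarity)

# Example: an acetate buffer (pKa of acetic acid is about 4.76).
print(round(buffer_ph(4.76, 0.10, 0.10), 2))  # equal concentrations -> pH ≈ pKa
print(round(buffer_ph(4.76, 0.20, 0.10), 2))  # twice as much base -> pH ≈ 5.06
```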
0.773645
0.995804
0.770399
Maillard reaction
The Maillard reaction is a chemical reaction between amino acids and reducing sugars to create melanoidins, the compounds that give browned food its distinctive flavor. Seared steaks, fried dumplings, cookies and other kinds of biscuits, breads, toasted marshmallows, falafel and many other foods undergo this reaction. It is named after French chemist Louis Camille Maillard, who first described it in 1912 while attempting to reproduce biological protein synthesis. The reaction is a form of non-enzymatic browning which typically proceeds rapidly from around 140 to 165 °C. Many recipes call for an oven temperature high enough to ensure that a Maillard reaction occurs. At higher temperatures, caramelization (the browning of sugars, a distinct process) and subsequently pyrolysis (final breakdown leading to burning and the development of acrid flavors) become more pronounced. The reactive carbonyl group of the sugar reacts with the nucleophilic amino group of the amino acid and forms a complex mixture of poorly characterized molecules responsible for a range of aromas and flavors. This process is accelerated in an alkaline environment (e.g., lye applied to darken pretzels; see lye roll), as the amino groups are deprotonated, and hence have an increased nucleophilicity. This reaction is the basis for many of the flavoring industry's recipes. At high temperatures, a probable carcinogen called acrylamide can form. This can be discouraged by heating at a lower temperature, adding asparaginase, or injecting carbon dioxide. In the cooking process, Maillard reactions can produce hundreds of different flavor compounds depending on the chemical constituents in the food, the temperature, the cooking time, and the presence of air. These compounds, in turn, often break down to form yet more flavor compounds. Flavor scientists have used the Maillard reaction over the years to make artificial flavors, the majority of patents being related to the production of meat-like flavors. History In 1912, Louis Camille Maillard published a paper describing the reaction between amino acids and sugars at elevated temperatures. In 1953, chemist John E. Hodge with the U.S. Department of Agriculture established a mechanism for the Maillard reaction. Foods and products The Maillard reaction is responsible for many colors and flavors in foods, such as the browning of various meats when seared or grilled, the browning and umami taste in fried onions and coffee roasting. It contributes to the darkened crust of baked goods, the golden-brown color of French fries and other crisps, browning of malted barley as found in malt whiskey and beer, and the color and taste of dried and condensed milk, dulce de leche, toffee, black garlic, chocolate, toasted marshmallows, and roasted peanuts. 6-Acetyl-2,3,4,5-tetrahydropyridine is responsible for the biscuit or cracker-like flavor present in baked goods such as bread, popcorn, and tortilla products. The structurally related compound 2-acetyl-1-pyrroline has a similar smell and also occurs naturally without heating. The compound gives varieties of cooked rice and the herb pandan (Pandanus amaryllifolius) their typical smells. Both compounds have odor thresholds below 0.06 nanograms per liter. The browning reactions that occur when meat is roasted or seared are complex and occur mostly by Maillard browning with contributions from other chemical reactions, including the breakdown of the tetrapyrrole rings of the muscle protein myoglobin. 
Maillard reactions also occur in dried fruit and when champagne ages in the bottle. Caramelization is an entirely different process from Maillard browning, though the results of the two processes are sometimes similar to the naked eye (and taste buds). Caramelization may sometimes cause browning in the same foods in which the Maillard reaction occurs, but the two processes are distinct. They are both promoted by heating, but the Maillard reaction involves amino acids, whereas caramelization is the pyrolysis of certain sugars. In making silage, excess heat causes the Maillard reaction to occur, which reduces the amount of energy and protein available to the animals that feed on it. Archaeology In archaeology, the Maillard process occurs when bodies are preserved in peat bogs. The acidic peat environment causes a tanning or browning of skin tones and can turn hair to a red or ginger tone. The chemical mechanism is the same as in the browning of food, but it develops slowly over time due to the acidic action on the bog body. It is typically seen on Iron Age bodies and was described by Painter in 1991 as the interaction of anaerobic, acidic, and cold sphagnum acid on the polysaccharides. The Maillard reaction also contributes to the preservation of paleofeces. Chemical mechanism The carbonyl group of the sugar reacts with the amino group of the amino acid, producing N-substituted glycosylamine and water. The unstable glycosylamine then undergoes Amadori rearrangement, forming ketosamines. Several ways are known for the ketosamines to react further: they can produce two water molecules and reductones; diacetyl, pyruvaldehyde, and other short-chain hydrolytic fission products can be formed; or they can produce brown nitrogenous polymers and melanoidins. The open-chain Amadori products undergo further dehydration and deamination to produce dicarbonyls. This is a crucial intermediate. Dicarbonyls react with amines to produce Strecker aldehydes through Strecker degradation. Acrylamide, a possible human carcinogen, can be generated as a byproduct of the Maillard reaction between reducing sugars and amino acids, especially asparagine, both of which are present in most food products. See also Akabori amino-acid reaction Advanced glycation end-product Baking Caramelization References Further reading Van Soest, Peter J. (1982). Nutritional Ecology of the Ruminant (2nd ed.). Ithaca, NY: Cornell University Press. External links Food chemistry Name reactions
0.770974
0.99924
0.770388
Biological system
A biological system is a complex network which connects several biologically relevant entities. Biological organization spans several scales and is determined by different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Exocrine system: various functions including lubrication and protection by exocrine glands such as sweat glands, mucous glands, lacrimal glands and mammary glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system's functions include immune responses and the development of antibodies. Immune system: protects the organism from foreign bodies. Nervous system: collecting, transferring and processing information with brain, spinal cord, peripheral nervous system and sense organs. Sensory systems: visual system, auditory system, olfactory system, gustatory system, somatosensory system, vestibular system. Muscular system: allows for manipulation of the environment, provides locomotion, maintains posture, and produces heat. Includes skeletal muscles, smooth muscles and cardiac muscle. Reproductive system: the sex organs, such as ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles and prostate. History The notion of system (or apparatus) relies upon the concept of vital or organic function: a system is a set of organs with a definite function. This idea was already present in Antiquity (Galen, Aristotle), but the application of the term "system" is more recent. For example, the nervous system was named by Monro (1783), but Rufus of Ephesus (c. 90–120) clearly viewed for the first time the brain, spinal cord, and craniospinal nerves as an anatomical unit, although he wrote little about its function, nor gave a name to this unit. The enumeration of the principal functions - and consequently of the systems - has remained almost the same since Antiquity, but the classification of them has been very various, e.g., compare Aristotle, Bichat, Cuvier. 
The notion of physiological division of labor, introduced in the 1820s by the French physiologist Henri Milne-Edwards, made it possible to "compare and study living things as if they were machines created by the industry of man." Inspired by the work of Adam Smith, Milne-Edwards wrote that the "body of all living beings, whether animal or plant, resembles a factory ... where the organs, comparable to workers, work incessantly to produce the phenomena that constitute the life of the individual." In more differentiated organisms, the functional labor could be apportioned between different instruments or systems (called by him appareils). Cellular organelle systems The exact components of a cell are determined by whether the cell is a eukaryote or prokaryote. Nucleus (eukaryotic only): storage of genetic material; control center of the cell. Cytosol: component of the cytoplasm consisting of jelly-like fluid in which organelles are suspended. Cell membrane (plasma membrane): semipermeable boundary that encloses the cell and regulates what enters and leaves it. Endoplasmic reticulum: outer part of the nuclear envelope forming a continuous channel used for transportation; consists of the rough endoplasmic reticulum and the smooth endoplasmic reticulum. Rough endoplasmic reticulum (RER): considered "rough" due to the ribosomes attached to the channeling; made up of cisternae that allow for protein production. Smooth endoplasmic reticulum (SER): storage and synthesis of lipids and steroid hormones as well as detoxification. Ribosome: site of biological protein synthesis essential for internal activity and cannot be reproduced in other organs. Mitochondrion (mitochondria): powerhouse of the cell; site of cellular respiration producing ATP (adenosine triphosphate). Lysosome: center of breakdown for unwanted/unneeded material within the cell. Peroxisome: breaks down toxic materials such as H2O2 (hydrogen peroxide) using the digestive enzymes it contains. Golgi apparatus (eukaryotic only): folded network involved in modification, transport, and secretion. Chloroplast: site of photosynthesis; storage of chlorophyll. See also Biological network Artificial life Biological systems engineering Evolutionary systems Organ system Systems biology Systems ecology Systems theory External links Systems Biology: An Overview by Mario Jardon: A review from the Science Creative Quarterly, 2005. Synthesis and Analysis of a Biological System, by Hiroyuki Kurata, 1999. It from bit and fit from bit. On the origin and impact of information in the average evolution. Includes how life forms and biological systems originate and from there evolve to become more and more complex, including evolution of genes and memes, into the complex memetics from organisations and multinational corporations and a "global brain", (Yves Decadt, 2000). Book published in Dutch with English paper summary in The Information Philosopher, http://www.informationphilosopher.com/solutions/scientists/decadt/ Schmidt-Rhaesa, A. 2007. The Evolution of Organ Systems. Oxford University Press, Oxford. References Biological systems
0.775318
0.993594
0.770351
Computational model
A computational model uses computer programs to simulate and study complex systems using an algorithmic or mechanistic approach and is widely used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics, psychology, cognitive science and computer science. The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Operation theories of the model can be derived/deduced from these computational experiments. Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models. See also Computational Engineering Computational cognition Reversible computing Agent-based model Artificial neural network Computational linguistics Computational human modeling Data-driven model Decision field theory Dynamical systems model of cognition Membrane computing Ontology (information science) Programming language theory Microscale and macroscale models References Models of computation Mathematical modeling
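The following minimal sketch illustrates the working style described above: rather than solving a system analytically, one simulates it and compares outcomes as a parameter is varied. The discrete logistic growth model and all parameter values are illustrative assumptions, not part of the original article.

```python
# A minimal computational model: simulate a nonlinear system and "experiment"
# by adjusting a parameter, then compare the outcomes.

def simulate_logistic(r, carrying_capacity=1000.0, population=10.0, steps=50):
    """Discrete-time logistic growth: each step the population grows at
    rate r, damped as it approaches the carrying capacity."""
    for _ in range(steps):
        population += r * population * (1 - population / carrying_capacity)
    return population

for growth_rate in (0.1, 0.5, 1.0):
    final = simulate_logistic(growth_rate)
    print(f"r = {growth_rate}: population after 50 steps ≈ {final:.1f}")
```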
0.78687
0.978987
0.770336
Radical (chemistry)
In chemistry, a radical, also known as a free radical, is an atom, molecule, or ion that has at least one unpaired valence electron. With some exceptions, these unpaired electrons make radicals highly chemically reactive. Many radicals spontaneously dimerize. Most organic radicals have short lifetimes. A notable example of a radical is the hydroxyl radical (HO·), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene (꞉CH2), which have two unpaired electrons. Radicals may be generated in a number of ways, but typical methods involve redox reactions. Ionizing radiation, heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations. Radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound. Formation Radicals are either (1) formed from spin-paired molecules or (2) formed from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition, and elimination reactions. Radical formation from spin-paired molecules Homolysis Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. Because breaking a chemical bond requires energy, homolysis occurs under the addition of heat or light. The bond dissociation energy associated with homolysis depends on the stability of a given compound, and some weak bonds are able to homolyze at relatively lower temperatures. Some homolysis reactions are particularly important because they serve as an initiator for other radical reactions. One such example is the homolysis of halogens, which occurs under light and serves as the driving force for radical halogenation reactions. Another notable reaction is the homolysis of dibenzoyl peroxide, which results in the formation of two benzoyloxy radicals and acts as an initiator for many radical reactions. Reduction Classically, radicals form by one-electron reductions. Typically, one-electron-reduced organic compounds are unstable. Stability is conferred to the radical anion when the charge can be delocalized. Examples include alkali metal naphthalenides, anthracenides, and ketyls. Radical formation from other radicals Abstraction Hydrogen abstraction generates radicals. To achieve this reaction, the C-H bond of the H-atom donor must be weak, which is rarely the case in organic compounds. Allylic and especially doubly allylic C-H bonds are prone to abstraction by O2. This reaction is the basis of drying oils, such as linoleic acid derivatives. Addition In free-radical additions, a radical adds to a spin-paired substrate. When applied to organic compounds, the reaction usually entails addition to an alkene. 
This addition generates a new radical, which can add to yet another alkene, etc. This behavior underpins radical polymerization, the technology that produces many plastics. Elimination Radical elimination can be viewed as the reverse of radical addition. In radical elimination, an unstable radical compound breaks down into a spin-paired molecule and a new radical compound. An example of a radical elimination reaction is the breakdown of a benzoyloxy radical into a phenyl radical and a carbon dioxide molecule. Stability Stability of organic radicals The generation and reactivity of organic radicals are dependent on both their thermodynamic stability and kinetic stability, also known as persistency. This distinction is necessary because these two types of stability do not always correlate with each other. For example, benzylic radicals, which are known for their weak benzylic C−H bond strength, are thermodynamically stabilized due to resonance delocalization. However, these radicals are kinetically transient because they can undergo rapid, diffusion-limited dimerization, resulting in a lifetime that is less than a few nanoseconds. To avoid confusion, particularly for carbon-centered radicals, Griller and Ingold introduced the following definitions: "Stabilized should be used to describe a carbon-centered radical, R·, when the R−H bond strength is weaker than the appropriate C−H bond of alkane." "Persistent should be used to describe a radical that has a lifetime that is significantly greater than methyl [radical] under the same condition." While the relationship between thermodynamic stability and kinetic persistency is highly case-dependent, organic radicals can be generally stabilized by any or all of these factors: electronegativity, delocalization, and steric hindrance. The compound 2,2,6,6-tetramethylpiperidinyloxyl illustrates the combination of all three factors. It is a commercially available solid that, aside from being magnetic, behaves like a normal organic compound. Electronegativity Organic radicals are inherently electron deficient; thus, the greater the electronegativity of the atom on which the unpaired electron resides, the less stable the radical. Among carbon, nitrogen, and oxygen, for example, a radical centered on carbon is the most stable and one centered on oxygen the least stable. Electronegativity also factors into the stability of carbon atoms of different hybridizations. Greater s-character correlates to higher electronegativity of the carbon atom (due to the close proximity of s orbitals to the nucleus), and the greater the electronegativity the less stable a radical. sp-hybridized carbons (50% s-character) form the least stable radicals compared to sp3-hybridized carbons (25% s-character) which form the most stable radicals. Delocalization The delocalization of electrons across the structure of a radical, also known as its ability to form one or more resonance structures, allows for the electron deficiency to be spread over several atoms, minimizing instability. Delocalization usually occurs in the presence of electron-donating groups, such as hydroxyl groups (−OH), ethers (−OR), adjacent alkenes, and amines (−NH2 or −NR), or electron-withdrawing groups, such as C=O or C≡N. 
Delocalization effects can also be understood using molecular orbital theory as a lens, more specifically, by examining the intramolecular interaction of the unpaired electron with a donating group's pair of electrons or the empty π* orbital of an electron-withdrawing group in the form of a molecular orbital diagram. The HOMO of a radical is singly-occupied hence the orbital is aptly referred to as the SOMO, or the Singly-Occupied Molecular Orbital. For an electron-donating group, the SOMO interacts with the lower energy lone pair to form a new lower-energy filled bonding-orbital and a singly-filled new SOMO, higher in energy than the original. While the energy of the unpaired electron has increased, the decrease in energy of the lone pair forming the new bonding orbital outweighs the increase in energy of the new SOMO, resulting in a net decrease of the energy of the molecule. Therefore, electron-donating groups help stabilize radicals. With a group that is instead electron-withdrawing, the SOMO then interacts with the empty π* orbital. There are no electrons occupying the higher energy orbital formed, while a new SOMO forms that is lower in energy. This results in a lower energy and higher stability of the radical species. Both donating groups and withdrawing groups stabilize radicals. Another well-known albeit weaker form of delocalization is hyperconjugation. In radical chemistry, radicals are stabilized by hyperconjugation with adjacent alkyl groups. The donation of sigma (σ) C−H bonds into the partially empty radical orbitals helps to differentiate the stabilities of radicals on tertiary, secondary, and primary carbons. Tertiary carbon radicals have three σ C-H bonds that donate, secondary radicals only two, and primary radicals only one. Therefore, tertiary radicals are the most stable and primary radicals the least stable. Steric hindrance Most simply, the greater the steric hindrance the more difficult it is for reactions to take place, and the radical form is favored by default. For example, compare the hydrogen-abstracted form of N-hydroxypiperidine to the molecule TEMPO. TEMPO, or (2,2,6,6-Tetramethylpiperidin-1-yl)oxyl, is too sterically hindered by the additional methyl groups to react making it stable enough to be sold commercially in its radical form. N-Hydroxypiperidine, however, does not have the four methyl groups to impede the way of a reacting molecule so the structure is unstable. Facile H-atom donors The stability of many (or most) organic radicals is not indicated by their isolability but is manifested in their ability to function as donors of H•. This property reflects a weakened bond to hydrogen, usually O−H but sometimes N−H or C−H. This behavior is important because these H• donors serve as antioxidants in biology and in commerce. Illustrative is α-tocopherol (vitamin E). The tocopherol radical itself is insufficiently stable for isolation, but the parent molecule is a highly effective hydrogen-atom donor. The C−H bond is weakened in triphenylmethyl (trityl) derivatives. Inorganic radicals A large variety of inorganic radicals are stable and in fact isolable. Examples include most first-row transition metal complexes. With regard to main group radicals, the most abundant radical in the universe is also the most abundant chemical in the universe, H•. Most main group radicals are not however isolable, despite their intrinsic stability. Hydrogen radicals for example combine eagerly to form H2. 
Nitric oxide (NO) is a well-known example of an isolable inorganic radical. Fremy's salt (potassium nitrosodisulfonate, (KSO3)2NO) is a related example. Many thiazyl radicals are known, despite the limited extent of π resonance stabilization. Many radicals can be envisioned as the products of breaking of covalent bonds by homolysis. The homolytic bond dissociation energies, usually abbreviated as "ΔH°", are a measure of bond strength. Splitting H2 into 2 H•, for example, requires a ΔH° of +435 kJ/mol, while splitting Cl2 into two Cl• requires a ΔH° of +243 kJ/mol. For weak bonds, homolysis can be induced thermally. Strong bonds require high energy photons or even flames to induce homolysis. Diradicals Diradicals are molecules containing two radical centers. Dioxygen (O2) is an important example of a stable diradical. Singlet oxygen, the lowest-energy non-radical state of dioxygen, is less stable than the diradical due to Hund's rule of maximum multiplicity. The relative stability of the oxygen diradical is primarily due to the spin-forbidden nature of the triplet-singlet transition required for it to grab electrons, i.e., "oxidize". The diradical state of oxygen also results in its paramagnetic character, which is demonstrated by its attraction to an external magnet. Diradicals can also occur in metal-oxo complexes, lending themselves to studies of spin forbidden reactions in transition metal chemistry. Carbenes in their triplet state can be viewed as diradicals centred on the same atom; while these are usually highly reactive, persistent carbenes are known, with N-heterocyclic carbenes being the most common example. Triplet carbenes and nitrenes are diradicals. Their chemical properties are distinct from the properties of their singlet analogues. Occurrence of radicals Combustion A familiar radical reaction is combustion. The oxygen molecule is a stable diradical, best represented by •O–O•. Because spins of the electrons are parallel, this molecule is stable. While the ground state of oxygen is this unreactive spin-unpaired (triplet) diradical, an extremely reactive spin-paired (singlet) state is available. For combustion to occur, the energy barrier between these must be overcome. This barrier can be overcome by heat, requiring high temperatures. The triplet-singlet transition is also "forbidden". This presents an additional barrier to the reaction. It also means molecular oxygen is relatively unreactive at room temperature except in the presence of a catalytic heavy atom such as iron or copper. Combustion consists of various radical chain reactions that the singlet radical can initiate. The flammability of a given material strongly depends on the concentration of radicals that must be obtained before initiation and propagation reactions dominate, leading to combustion of the material. Once the combustible material has been consumed, termination reactions again dominate and the flame dies out. As indicated, promotion of propagation or termination reactions alters flammability. For example, because lead itself deactivates radicals in the gasoline-air mixture, tetraethyl lead was once commonly added to gasoline. This prevents the combustion from initiating in an uncontrolled manner or in unburnt residues (engine knocking) or premature ignition (preignition). When a hydrocarbon is burned, a large number of different oxygen radicals are involved. Initially, hydroperoxyl radicals (HOO•) are formed. These then react further to give organic hydroperoxides that break up into hydroxyl radicals (HO•). 
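A quick numerical sketch of the last point: the bond dissociation energies quoted above set the minimum photon energy needed for photolytic homolysis, so the longest usable wavelength follows from λ = N_A h c / ΔH°. The helper function below is illustrative; only the two ΔH° values come from the text.

```python
# Relate a homolytic bond dissociation energy (per mole) to the longest
# photon wavelength that can still break the bond: lambda = N_A * h * c / E.
AVOGADRO = 6.022e23        # 1/mol
PLANCK = 6.626e-34         # J*s
LIGHT_SPEED = 2.998e8      # m/s

def max_wavelength_nm(bond_energy_kj_per_mol):
    energy_j_per_mol = bond_energy_kj_per_mol * 1000.0
    wavelength_m = AVOGADRO * PLANCK * LIGHT_SPEED / energy_j_per_mol
    return wavelength_m * 1e9

print(round(max_wavelength_nm(435)))  # H-H bond: ~275 nm, i.e. ultraviolet photons
print(round(max_wavelength_nm(243)))  # Cl-Cl bond: ~492 nm, i.e. visible light suffices
```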
Polymerization Many polymerization reactions are initiated by radicals. Polymerization involves an initial radical adding to non-radical (usually an alkene) to give new radicals. This process is the basis of the radical chain reaction. The art of polymerization entails the method by which the initiating radical is introduced. For example, methyl methacrylate (MMA) can be polymerized to produce Poly(methyl methacrylate) (PMMA – Plexiglas or Perspex) via a repeating series of radical addition steps: Newer radical polymerization methods are known as living radical polymerization. Variants include reversible addition-fragmentation chain transfer (RAFT) and atom transfer radical polymerization (ATRP). Being a prevalent radical, O2 reacts with many organic compounds to generate radicals together with the hydroperoxide radical. Drying oils and alkyd paints harden due to radical crosslinking initiated by oxygen from the atmosphere. Atmospheric radicals The most common radical in the lower atmosphere is molecular dioxygen. Photodissociation of source molecules produces other radicals. In the lower atmosphere, important radical are produced by the photodissociation of nitrogen dioxide to an oxygen atom and nitric oxide (see below), which plays a key role in smog formation—and the photodissociation of ozone to give the excited oxygen atom O(1D) (see below). The net and return reactions are also shown ( and , respectively). In the upper atmosphere, the photodissociation of normally unreactive chlorofluorocarbons (CFCs) by solar ultraviolet radiation is an important source of radicals (see eq. 1 below). These reactions give the chlorine radical, Cl•, which catalyzes the conversion of ozone to O2, thus facilitating ozone depletion (– below). Such reactions cause the depletion of the ozone layer, especially since the chlorine radical is free to engage in another reaction chain; consequently, the use of chlorofluorocarbons as refrigerants has been restricted. In biology Radicals play important roles in biology. Many of these are necessary for life, such as the intracellular killing of bacteria by phagocytic cells such as granulocytes and macrophages. Radicals are involved in cell signalling processes, known as redox signaling. For example, radical attack of linoleic acid produces a series of 13-hydroxyoctadecadienoic acids and 9-hydroxyoctadecadienoic acids, which may act to regulate localized tissue inflammatory and/or healing responses, pain perception, and the proliferation of malignant cells. Radical attacks on arachidonic acid and docosahexaenoic acid produce a similar but broader array of signaling products. Radicals may also be involved in Parkinson's disease, senile and drug-induced deafness, schizophrenia, and Alzheimer's. The classic free-radical syndrome, the iron-storage disease hemochromatosis, is typically associated with a constellation of free-radical-related symptoms including movement disorder, psychosis, skin pigmentary melanin abnormalities, deafness, arthritis, and diabetes mellitus. The free-radical theory of aging proposes that radicals underlie the aging process itself. Similarly, the process of mitohormesis suggests that repeated exposure to radicals may extend life span. Because radicals are necessary for life, the body has a number of mechanisms to minimize radical-induced damage and to repair damage that occurs, such as the enzymes superoxide dismutase, catalase, glutathione peroxidase and glutathione reductase. 
In addition, antioxidants play a key role in these defense mechanisms. These are often the three vitamins, vitamin A, vitamin C and vitamin E and polyphenol antioxidants. Furthermore, there is good evidence indicating that bilirubin and uric acid can act as antioxidants to help neutralize certain radicals. Bilirubin comes from the breakdown of red blood cells' contents, while uric acid is a breakdown product of purines. Too much bilirubin, though, can lead to jaundice, which could eventually damage the central nervous system, while too much uric acid causes gout. Reactive oxygen species Reactive oxygen species or ROS are species such as superoxide, hydrogen peroxide, and hydroxyl radical, commonly associated with cell damage. ROS form as a natural by-product of the normal metabolism of oxygen and have important roles in cell signaling. Two important oxygen-centered radicals are superoxide and hydroxyl radical. They derive from molecular oxygen under reducing conditions. However, because of their reactivity, these same radicals can participate in unwanted side reactions resulting in cell damage. Excessive amounts of these radicals can lead to cell injury and death, which may contribute to many diseases such as cancer, stroke, myocardial infarction, diabetes and major disorders. Many forms of cancer are thought to be the result of reactions between radicals and DNA, potentially resulting in mutations that can adversely affect the cell cycle and potentially lead to malignancy. Some of the symptoms of aging such as atherosclerosis are also attributed to radical induced oxidation of cholesterol to 7-ketocholesterol. In addition radicals contribute to alcohol-induced liver damage, perhaps more than alcohol itself. Radicals produced by cigarette smoke are implicated in inactivation of alpha 1-antitrypsin in the lung. This process promotes the development of emphysema. Oxybenzone has been found to form radicals in sunlight, and therefore may be associated with cell damage as well. This only occurred when it was combined with other ingredients commonly found in sunscreens, like titanium oxide and octyl methoxycinnamate. ROS attack the polyunsaturated fatty acid, linoleic acid, to form a series of 13-hydroxyoctadecadienoic acid and 9-hydroxyoctadecadienoic acid products that serve as signaling molecules that may trigger responses that counter the tissue injury which caused their formation. ROS attacks other polyunsaturated fatty acids, e.g. arachidonic acid and docosahexaenoic acid, to produce a similar series of signaling products. Reactive oxygen species are also used in controlled reactions involving singlet dioxygen known as type II photooxygenation reactions after Dexter energy transfer (triplet-triplet annihilation) from natural triplet dioxygen and triplet excited state of a photosensitizer. Typical chemical transformations with this singlet dioxygen species involve, among others, conversion of cellulosic biowaste into new poylmethine dyes. Depiction in chemical reactions In chemical equations, radicals are frequently denoted by a dot placed immediately to the right of the atomic symbol or molecular formula as follows: Radical reaction mechanisms use single-headed arrows to depict the movement of single electrons: The homolytic cleavage of the breaking bond is drawn with a "fish-hook" arrow to distinguish from the usual movement of two electrons depicted by a standard curly arrow. The second electron of the breaking bond also moves to pair up with the attacking radical electron. 
Radicals also take part in radical addition and radical substitution as reactive intermediates. Chain reactions involving radicals can usually be divided into three distinct processes. These are initiation, propagation, and termination. Initiation reactions are those that result in a net increase in the number of radicals. They may involve the formation of radicals from stable species as in Reaction 1 above or they may involve reactions of radicals with stable species to form more radicals. Propagation reactions are those reactions involving radicals in which the total number of radicals remains the same. Termination reactions are those reactions resulting in a net decrease in the number of radicals. Typically two radicals combine to form a more stable species, for example: 2 Cl• → Cl2 History and nomenclature Until late in the 20th century the word "radical" was used in chemistry to indicate any connected group of atoms, such as a methyl group or a carboxyl, whether it was part of a larger molecule or a molecule on its own. A radical is often known as an R group. The qualifier "free" was then needed to specify the unbound case. Following recent nomenclature revisions, a part of a larger molecule is now called a functional group or substituent, and "radical" now implies "free". However, the old nomenclature may still appear in some books. The term radical was already in use when the now obsolete radical theory was developed. Louis-Bernard Guyton de Morveau introduced the phrase "radical" in 1785 and the phrase was employed by Antoine Lavoisier in 1789 in his Traité Élémentaire de Chimie. A radical was then identified as the root base of certain acids (the Latin word "radix" meaning "root"). Historically, the term radical in radical theory was also used for bound parts of the molecule, especially when they remain unchanged in reactions. These are now called functional groups. For example, methyl alcohol was described as consisting of a methyl "radical" and a hydroxyl "radical". Neither are radicals in the modern chemical sense, as they are permanently bound to each other, and have no unpaired, reactive electrons; however, they can be observed as radicals in mass spectrometry when broken apart by irradiation with energetic electrons. In a modern context the first organic (carbon–containing) radical identified was the triphenylmethyl radical, (C6H5)3C•. This species was discovered by Moses Gomberg in 1900. In 1933 Morris S. Kharasch and Frank Mayo proposed that free radicals were responsible for anti-Markovnikov addition of hydrogen bromide to allyl bromide. In most fields of chemistry, the historical definition of radicals contends that the molecules have nonzero electron spin. However, in fields including spectroscopy and astrochemistry, the definition is slightly different. Gerhard Herzberg, who won the Nobel prize for his research into the electron structure and geometry of radicals, suggested a looser definition of free radicals: "any transient (chemically unstable) species (atom, molecule, or ion)". The main point of his suggestion is that there are many chemically unstable molecules that have zero spin, such as C2, C3, CH2 and so on. This definition is more convenient for discussions of transient chemical processes and astrochemistry; therefore researchers in these fields prefer to use this loose definition. 
See also Electron pair Globally Harmonized System of Classification and Labelling of Chemicals Hofmann–Löffler reaction Free radical research ARC Centre of Excellence for Free Radical Chemistry and Biotechnology References Articles containing video clips Biological processes Biomolecules Chemical bonding Environmental chemistry Senescence
Morphological analysis (problem-solving)
Morphological analysis or general morphological analysis is a method for exploring possible solutions to a multi-dimensional, non-quantified complex problem. It was developed by Swiss astronomer Fritz Zwicky. General morphology has found use in fields including engineering design, technological forecasting, organizational development and policy analysis. Overview General morphology was developed by Fritz Zwicky, the Bulgarian-born, Swiss-national astrophysicist based at the California Institute of Technology. Among others, Zwicky applied morphological analysis to astronomical studies and jet and rocket propulsion systems. As a problem-structuring and problem-solving technique, morphological analysis was designed for multi-dimensional, non-quantifiable problems where causal modelling and simulation do not function well, or at all. Zwicky developed this approach to address seemingly non-reducible complexity: using the technique of cross-consistency assessment (CCA), the system allows for reduction by identifying the possible solutions that actually exist, eliminating the illogical solution combinations in a grid box rather than reducing the number of variables involved. Decomposition versus morphological analysis Problems that involve many governing factors, where most of them cannot be expressed numerically can be well suited for morphological analysis. The conventional approach is to break a complex system into parts, isolate the parts (dropping the 'trivial' elements) whose contributions are critical to the output and solve the simplified system for desired scenarios. The disadvantage of this method is that many real-world phenomena do not have obviously trivial elements and cannot be simplified. Morphological analysis works backwards from the output towards the system internals without a simplification step. The system's interactions are fully accounted for in the analysis. References Further reading Duczynski, Guy; dov Bachmann, Sascha; Smith, Matthew; Knight, Charles (August 2023). "Operational and Strategic Progress in Ukraine: Identifying the Condition Changes". Naval Post-Graduate School, Insights, Monterrey. available at: https://nps.edu/web/ecco/global-ecco-insights See also Corporate strategy Futures studies Influence diagrams Market research Morphological box Scenario analysis Scenario planning Socio-technical systems Stakeholder analysis Strategic planning TRIZ Wicked problem Morphology Problem solving methods
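As a rough illustration of the cross-consistency assessment (CCA) described above, the sketch below builds a small morphological box (the Cartesian product of parameter values) and prunes configurations containing any pair of values judged mutually inconsistent. The parameters, values, and exclusion pairs are invented for the example; only the pruning logic reflects the method.

```python
from itertools import product

# A toy morphological box: each dimension of the problem and its possible values.
# The dimensions and values here are made up purely for illustration.
parameters = {
    "power_source": ["battery", "mains", "solar"],
    "deployment":   ["indoor", "outdoor"],
    "connectivity": ["wired", "wireless"],
}

# Cross-consistency assessment: pairs of values judged incompatible.
# (In a real study these judgments come from the analysts, not from code.)
inconsistent_pairs = {
    frozenset({"solar", "indoor"}),   # solar power assumed unusable indoors
    frozenset({"mains", "outdoor"}),  # no mains socket assumed in the field
}

def is_consistent(config: dict) -> bool:
    """A configuration survives if none of its value pairs is excluded."""
    values = list(config.values())
    return not any(
        frozenset({a, b}) in inconsistent_pairs
        for i, a in enumerate(values)
        for b in values[i + 1:]
    )

names = list(parameters)
all_configs = [dict(zip(names, combo)) for combo in product(*parameters.values())]
surviving = [c for c in all_configs if is_consistent(c)]

print(f"{len(all_configs)} raw combinations, {len(surviving)} survive the CCA:")
for c in surviving:
    print("  ", c)
```

The point of the exercise is the one made in the overview: the solution space is reduced by eliminating illogical combinations rather than by dropping variables.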
Anagenesis
Anagenesis is the gradual evolution of a species that continues to exist as an interbreeding population. This contrasts with cladogenesis, which occurs when there is branching or splitting, leading to two or more lineages and resulting in separate species. Anagenesis does not always lead to the formation of a new species from an ancestral species. When speciation does occur as different lineages branch off and cease to interbreed, a core group may continue to be defined as the original species. The evolution of this group, without extinction or species selection, is anagenesis. Hypotheses One hypothesis is that during the speciation event in anagenetic evolution, the original populations will increase quickly, and then rack up genetic variation over long periods of time by mutation and recombination in a stable environment. Other factors such as selection or genetic drift will have such a significant effect on genetic material and physical traits that a species can be acknowledged as being different from the previous. Development An alternative definition offered for anagenesis involves progeny relationships between designated taxa with one or more denominated taxa in line with a branch from the evolutionary tree. Taxa must be within the species or genus and will help identify possible ancestors. When looking at evolutionary descent, there are two mechanisms at play. The first process is when genetic information changes. This means that over time there is enough of a difference in their genomes, and in the way that species' genes interact with each other during the developmental stage, that anagenesis can thereby be viewed as the processes of sexual and natural selection, and genetic drift's effect on an evolving species over time. The second process, speciation, is closely associated with cladogenesis. Speciation includes the actual separation of lineages, into two or more new species, from one specified species of origin. Cladogenesis can be seen as a similar hypothesis to anagenesis, with the addition of speciation to its mechanisms. Diversity on a species-level is able to be achieved through anagenesis. Anagenesis suggests that evolutionary changes can occur in a species over time to a sufficient degree that later organisms may be considered a different species, especially in the absence of fossils documenting the gradual transition from one to another. This is in contrast to cladogenesis—or speciation in a sense—in which a population is split into two or more reproductively isolated groups and these groups accumulate sufficient differences to become distinct species. The punctuated equilibria hypothesis suggests that anagenesis is rare and that the rate of evolution is most rapid immediately after a split which will lead to cladogenesis, but does not completely rule out anagenesis. Distinguishing between anagenesis and cladogenesis is particularly relevant in the fossil record, where limited fossil preservation in time and space makes it difficult to distinguish between anagenesis, cladogenesis where one species replaces the other, or simple migration patterns. Recent evolutionary studies are looking at anagenesis and cladogenesis for possible answers in developing the hominin phylogenetic tree to understand morphological diversity and the origins of Australopithecus anamensis, and this case could possibly show anagenesis in the fossil record. 
When enough mutations have occurred and become stable in a population so that it is significantly differentiated from an ancestral population, a new species name may be assigned. A series of such species is collectively known as an evolutionary lineage. The various species along an evolutionary lineage are chronospecies. If the ancestral population of a chronospecies does not go extinct, then this is cladogenesis, and the ancestral population represents a paraphyletic species or paraspecies, being an evolutionary grade. In humans The modern human origins debate caused researchers to look further for answers. Researchers were curious to know if present day humans originated from Africa, or if they somehow, through anagenesis, were able to evolve from a single archaic species that lived in Afro-Eurasia. Milford H. Wolpoff is a paleoanthropologist whose work, studying human fossil records, explored anagenesis as a hypothesis for hominin evolution. When looking at anagenesis in hominids, M. H. Wolpoff describes in terms of the 'single-species hypothesis,' which is characterized by thinking of the impact that culture has on a species, as an adaptive system, and as an explanation for the conditions humans tend to live in, based on the environmental conditions, or the ecological niche. When judging the effect that culture has as an Adaptive System, scientists must first look at modern Homo Sapiens. Wolpoff contended that the ecological niche of past, extinct hominidae is distinct within the line of origin. Examining early Pliocene and late Miocenes findings helps to determine the corresponding importance of Anagenesis vs. Cladogenesis during the period of morphological differences. These findings propose that branches of the human and chimpanzee once diverged from each other. The hominin fossils go as far as 5 to 7 million years ago (Mya). Diversity on a species-level is able to be achieved through anagenesis. With collected data, only one or two early hominin were found to be relatively close to the Plio-Pleistocene range. Once more research was done, specifically with the fossils of A. anamensis and A. afarensis, researchers were able to justify that these two hominin species were linked ancestrally. However, looking at data collected by William H. Kimbel and other researchers, they viewed the history of early hominin fossils and concluded that actual macroevolution change via anagenesis was scarce. Phylogeny DEM (or Dynamic Evolutionary Map) is a different way to track ancestors and relationships between organisms. The pattern of branching in phylogenetic trees and how far the branch grows after a species lineage has split and evolved, correlates with anagenesis and cladogenesis. However, in DEM dots depict the movement of these different species. Anagenesis is viewed by observing the dot movement across the DEM, whereas cladogenesis is viewed by observing the separation and movement of the dots across the map. Criticism Controversy arises among taxonomists as to when the differences are significant enough to warrant a new species classification: Anagenesis may also be referred to as gradual evolution. The distinction of speciation and lineage evolution as anagenesis or cladogenesis can be controversial, and some academics question the necessity of the terms altogether. The philosopher of science Marc Ereshefsky argues that paraphyletic taxa are the result of anagenesis. 
The lineage leading to birds has diverged significantly from lizards and crocodiles, allowing evolutionary taxonomists to classify birds separately from lizards and crocodiles, which are grouped as reptiles. Applications Regarding social evolution, it has been suggested that social anagenesis/aromorphosis be viewed as universal or widely diffused social innovation that raises social systems' complexity, adaptability, integrity, and interconnectedness. See also Multigenomic organism References External links Diagram contrasting Anagenesis and Cladogenesis from the University of Newfoundland Evolutionary biology concepts Evolutionary biology terminology Rate of evolution Speciation
Hill equation (biochemistry)
In biochemistry and pharmacology, the Hill equation refers to two closely related equations that reflect the binding of ligands to macromolecules, as a function of the ligand concentration. A ligand is "a substance that forms a complex with a biomolecule to serve a biological purpose" (ligand definition), and a macromolecule is a very large molecule, such as a protein, with a complex structure of components (macromolecule definition). Protein-ligand binding typically changes the structure of the target protein, thereby changing its function in a cell. The distinction between the two Hill equations is whether they measure occupancy or response. The Hill–Langmuir equation reflects the occupancy of macromolecules: the fraction that is saturated or bound by the ligand. This equation is formally equivalent to the Langmuir isotherm. Conversely, the Hill equation proper reflects the cellular or tissue response to the ligand: the physiological output of the system, such as muscle contraction. The Hill equation was originally formulated by Archibald Hill in 1910 to describe the sigmoidal O2 binding curve of haemoglobin. The binding of a ligand to a macromolecule is often enhanced if there are already other ligands present on the same macromolecule (this is known as cooperative binding). The Hill equation is useful for determining the degree of cooperativity of the ligand(s) binding to the enzyme or receptor. The Hill coefficient provides a way to quantify the degree of interaction between ligand binding sites. The Hill equation (for response) is important in the construction of dose-response curves.
Proportion of ligand-bound receptors
The Hill equation is commonly expressed as θ = [L]^n / (K_d + [L]^n) = [L]^n / ((K_A)^n + [L]^n) = 1 / (1 + (K_A / [L])^n), where: θ is the fraction of the receptor protein concentration that is bound by the ligand, [L] is the total ligand concentration, K_d is the apparent dissociation constant derived from the law of mass action, K_A is the ligand concentration producing half occupation, and n is the Hill coefficient. The special case where n = 1 is a Monod equation.
Constants
In pharmacology, θ is often written as p_AR, where A is the ligand, equivalent to L, and R is the receptor. θ can be expressed in terms of the total amount of receptor and ligand-bound receptor concentrations: θ = [LR] / ([R] + [LR]). K_d is equal to the ratio of the dissociation rate of the ligand-receptor complex to its association rate; it is the equilibrium constant for dissociation. K_A is defined so that (K_A)^n = K_d; this is also known as the microscopic dissociation constant and is the ligand concentration occupying half of the binding sites. In recent literature, this constant is sometimes denoted by other symbols.
Gaddum equation
The Gaddum equation is a further generalisation of the Hill equation, incorporating the presence of a reversible competitive antagonist. The Gaddum equation is derived similarly to the Hill equation but with two equilibria: both the ligand with the receptor and the antagonist with the receptor. Hence, the Gaddum equation has two constants: the equilibrium constants of the ligand and of the antagonist.
Hill plot
The Hill plot is the rearrangement of the Hill equation into a straight line. Taking the reciprocal of both sides of the Hill equation, rearranging, and inverting again yields θ / (1 − θ) = [L]^n / K_d = ([L] / K_A)^n. Taking the logarithm of both sides of the equation leads to an alternative formulation of the Hill–Langmuir equation: log(θ / (1 − θ)) = n·log[L] − n·log K_A. This last form of the Hill equation is advantageous because a plot of log(θ / (1 − θ)) versus log[L] yields a linear plot, which is called a Hill plot.
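A minimal numerical sketch of the occupancy form of the equation is given below. The function simply evaluates θ = [L]^n / ((K_A)^n + [L]^n) together with the corresponding Hill-plot coordinates; the example values of K_A and n are arbitrary choices for illustration.

```python
import math

def hill_occupancy(ligand: float, K_A: float, n: float) -> float:
    """Fraction of receptor bound: theta = [L]^n / (K_A^n + [L]^n)."""
    return ligand**n / (K_A**n + ligand**n)

def hill_plot_point(ligand: float, K_A: float, n: float) -> tuple:
    """(log10[L], log10(theta/(1-theta))): the coordinates of a Hill plot."""
    theta = hill_occupancy(ligand, K_A, n)
    return math.log10(ligand), math.log10(theta / (1.0 - theta))

# Illustrative parameters (not taken from any particular receptor):
K_A, n = 10.0, 2.0   # half-occupation at [L] = 10 (arbitrary units), mild cooperativity

for L in (1, 5, 10, 20, 100):
    theta = hill_occupancy(L, K_A, n)
    x, y = hill_plot_point(L, K_A, n)
    print(f"[L] = {L:>5}: theta = {theta:.3f}, Hill plot point = ({x:.2f}, {y:.2f})")

# At [L] = K_A the occupancy is exactly 0.5, and the Hill-plot points fall on a
# straight line of slope n, which is what the linearization above is used for.
```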
Because the slope of a Hill plot is equal to the Hill coefficient for the biochemical interaction, the slope is denoted by n_H. A slope greater than one thus indicates positively cooperative binding between the receptor and the ligand, while a slope less than one indicates negatively cooperative binding. Transformations of equations into linear forms such as this were very useful before the widespread use of computers, as they allowed researchers to determine parameters by fitting lines to data. However, these transformations affect error propagation, and this may result in undue weight being given to error in data points near 0 or 1. This impacts the parameters of linear regression lines fitted to the data. Furthermore, the use of computers enables more robust analysis involving nonlinear regression.
Tissue response
A distinction should be made between quantification of drugs binding to receptors and drugs producing responses. There may not necessarily be a linear relationship between the two values. In contrast to this article's previous definition of the Hill equation, the IUPHAR defines the Hill equation in terms of the tissue response E, as E / E_max = [A]^n / (EC50^n + [A]^n), where [A] is the drug concentration, n is the Hill coefficient, and EC50 is the drug concentration that produces a 50% maximal response. Dissociation constants (in the previous section) relate to ligand binding, while EC50 reflects tissue response. This form of the equation can reflect tissue/cell/population responses to drugs and can be used to generate dose-response curves. The relationship between K_A and EC50 may be quite complex, as a biological response will be the sum of myriad factors; a drug will have a different biological effect if more receptors are present, regardless of its affinity. The del Castillo–Katz model is used to relate the Hill equation to receptor activation by including a second equilibrium of the ligand-bound receptor to an activated form of the ligand-bound receptor. Statistical analysis of response as a function of stimulus may be performed by regression methods such as the probit model or logit model, or other methods such as the Spearman–Kärber method. Empirical models based on nonlinear regression are usually preferred over the use of some transformation of the data that linearizes the dose-response relationship.
Hill coefficient
The Hill coefficient is a measure of ultrasensitivity (i.e., how steep the response curve is). The Hill coefficient, n or n_H, may describe cooperativity (or possibly other biochemical properties, depending on the context in which the Hill equation is being used). When appropriate, the value of the Hill coefficient describes the cooperativity of ligand binding in the following way. n > 1: positively cooperative binding. Once one ligand molecule is bound to the enzyme, its affinity for other ligand molecules increases. For example, the Hill coefficient of oxygen binding to haemoglobin (an example of positive cooperativity) falls within the range of 1.7–3.2. n < 1: negatively cooperative binding. Once one ligand molecule is bound to the enzyme, its affinity for other ligand molecules decreases. n = 1: noncooperative (completely independent) binding. The affinity of the enzyme for a ligand molecule is not dependent on whether or not other ligand molecules are already bound. When n = 1, we obtain a model that can be described by Michaelis–Menten kinetics, in which K_A = K_M, the Michaelis–Menten constant. The Hill coefficient can be calculated approximately in terms of the cooperativity index of Taketa and Pogell as follows: n_H = log(81) / log(EC90 / EC10), where EC90 and EC10 are the input values needed to produce 90% and 10% of the maximal response, respectively.
Reversible form
The most common form of the Hill equation is its irreversible form. However, when building computational models a reversible form is often required in order to model product inhibition. For this reason, Hofmeyr and Cornish-Bowden devised the reversible Hill equation.
Relationship to the elasticity coefficients
The Hill coefficient is also intimately connected to the elasticity coefficient, and can be shown to equal n = ε / (1 − θ), where θ is the fractional saturation, θ = [L]^n / ((K_A)^n + [L]^n), and ε is the elasticity coefficient, ε = d ln θ / d ln [L]. This is derived by taking the slope of the Hill equation, dθ/d[L], and expanding the slope using the quotient rule. The result shows that the elasticity can never exceed n, since the equation above can be rearranged to ε = n (1 − θ).
Applications
The Hill equation is used extensively in pharmacology to quantify the functional parameters of a drug and is also used in other areas of biochemistry. The Hill equation can be used to describe dose-response relationships, for example ion channel open-probability (P-open) vs. ligand concentration.
Regulation of gene transcription
The Hill equation can be applied in modelling the rate at which a gene product is produced when its parent gene is being regulated by transcription factors (e.g., activators and/or repressors). Doing so is appropriate when a gene is regulated by multiple binding sites for transcription factors, in which case the transcription factors may bind the DNA in a cooperative fashion. If the production of protein from gene X is up-regulated (activated) by a transcription factor Y, then the rate of production of protein X can be modeled as a differential equation in terms of the concentration of activated Y protein: d[X]/dt = k·[Y]^n / ((K_A)^n + [Y]^n), where k is the maximal transcription rate of gene X. Likewise, if the production of protein from gene X is down-regulated (repressed) by a transcription factor Y, then the rate of production of protein X can be modeled as a differential equation in terms of the concentration of activated Y protein: d[X]/dt = k·(K_A)^n / ((K_A)^n + [Y]^n), where k is the maximal transcription rate of gene X.
Limitations
Because of its assumption that ligand molecules bind to a receptor simultaneously, the Hill equation has been criticized as a physically unrealistic model. Moreover, the Hill coefficient should not be considered a reliable approximation of the number of cooperative ligand binding sites on a receptor, except when the binding of the first and subsequent ligands results in extreme positive cooperativity. Unlike more complex models, the relatively simple Hill equation provides little insight into underlying physiological mechanisms of protein-ligand interactions. This simplicity, however, is what makes the Hill equation a useful empirical model, since its use requires little a priori knowledge about the properties of either the protein or ligand being studied. Nevertheless, other, more complex models of cooperative binding have been proposed. For more information and examples of such models, see Cooperative binding. Global sensitivity measures such as the Hill coefficient do not characterise the local behaviour of s-shaped curves. Instead, these features are well captured by the response coefficient measure. There is a link between the Hill coefficient and the response coefficient: Altszyler et al. (2017) have shown that these ultrasensitivity measures can be linked.
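To make the gene-regulation forms described in the Regulation of gene transcription subsection above concrete, here is a small simulation sketch. The symbols X, Y, k, K_A, and n are generic choices rather than values from a specific source; the activating and repressing Hill-type production terms are integrated with simple first-order degradation using explicit Euler steps.

```python
def hill_activation(y: float, K_A: float, n: float) -> float:
    """Production term for an activator: y^n / (K_A^n + y^n)."""
    return y**n / (K_A**n + y**n)

def hill_repression(y: float, K_A: float, n: float) -> float:
    """Production term for a repressor: K_A^n / (K_A^n + y^n)."""
    return K_A**n / (K_A**n + y**n)

def simulate(production, y_tf: float, k: float = 1.0, K_A: float = 0.5,
             n: float = 4.0, gamma: float = 0.1, dt: float = 0.01,
             steps: int = 5000) -> float:
    """Euler-integrate d[X]/dt = k * production([Y]) - gamma * [X]; return final [X]."""
    x = 0.0
    for _ in range(steps):
        x += dt * (k * production(y_tf, K_A, n) - gamma * x)
    return x

# The steady state is roughly k * production / gamma; with n = 4 the switch is sharp
# around [Y] = K_A, which is the ultrasensitive behaviour discussed in the text.
for y in (0.1, 0.4, 0.5, 0.6, 1.0):
    x_act = simulate(hill_activation, y)
    x_rep = simulate(hill_repression, y)
    print(f"[Y] = {y:.1f}: activated gene -> {x_act:.2f}, repressed gene -> {x_rep:.2f}")
```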
See also Binding coefficient Bjerrum plot Cooperative binding Gompertz curve Langmuir adsorption model Logistic function Michaelis–Menten kinetics Monod equation Notes References Further reading Dorland's Illustrated Medical Dictionary External links Hill equation calculator Enzyme kinetics Pharmacology
Isoelectronicity
Isoelectronicity is a phenomenon observed when two or more molecules have the same structure (positions and connectivities among atoms) and the same electronic configurations, but differ by what specific elements are at certain locations in the structure. For example, , , and are isoelectronic, while and = are not. This definition is sometimes termed valence isoelectronicity. Definitions can sometimes be not as strict, sometimes requiring identity of the total electron count and with it the entire electronic configuration. More usually, definitions are broader, and may extend to allowing different numbers of atoms in the species being compared. The importance of the concept lies in identifying significantly related species, as pairs or series. Isoelectronic species can be expected to show useful consistency and predictability in their properties, so identifying a compound as isoelectronic with one already characterised offers clues to possible properties and reactions. Differences in properties such as electronegativity of the atoms in isolelectronic species can affect reactivity. In quantum mechanics, hydrogen-like atoms are ions with only one electron such as . These ions would be described as being isoelectronic with hydrogen. Examples The atom and the ion are isoelectronic because each has five valence electrons, or more accurately an electronic configuration of [He] 2s2 2p3. Similarly, the cations , , and and the anions , , and are all isoelectronic with the atom. , , , and are isoelectronic because each has two atoms triple bonded together, and due to the charge have analogous electronic configurations ( is identical in electronic configuration to so is identical electronically to ). Molecular orbital diagrams best illustrate isoelectronicity in diatomic molecules, showing how atomic orbital mixing in isoelectronic species results in identical orbital combination, and thus also bonding. More complex molecules can be polyatomic also. For example, the amino acids serine, cysteine, and selenocysteine are all isoelectronic to each other. They differ by which specific chalcogen is present at one location in the side-chain. (acetone) and (azomethane) are not isoelectronic. They do have the same number of electrons but they do not have the same structure. See also Isolobal principle References Theoretical chemistry
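Under the stricter total-electron-count reading of the definition, candidate species can be screened by simple bookkeeping: sum the atomic numbers and subtract the overall charge. The sketch below does exactly that for a few textbook diatomic species; note that an equal count is necessary but not sufficient, since the structures must also match.

```python
# Atomic numbers for the handful of elements used below.
Z = {"H": 1, "C": 6, "N": 7, "O": 8}

def electron_count(atoms: list, charge: int = 0) -> int:
    """Total electrons = sum of atomic numbers minus the overall charge."""
    return sum(Z[a] for a in atoms) - charge

species = {
    "CO":  (["C", "O"], 0),
    "N2":  (["N", "N"], 0),
    "NO+": (["N", "O"], +1),
    "CN-": (["C", "N"], -1),
    "O2":  (["O", "O"], 0),
}

for name, (atoms, charge) in species.items():
    print(f"{name:>4}: {electron_count(atoms, charge)} electrons")

# CO, N2, NO+ and CN- all come out to 14 electrons (and share a diatomic,
# triply bonded structure), whereas O2 has 16 electrons and is therefore
# not isoelectronic with them.
```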
Closed system
A closed system is a natural physical system that does not allow transfer of matter in or out of the system, although, in the contexts of physics, chemistry, engineering, etc., the transfer of energy (e.g. as work or heat) is allowed.
Physics
In classical mechanics
In nonrelativistic classical mechanics, a closed system is a physical system that does not exchange any matter with its surroundings, and is not subject to any net force whose source is external to the system. A closed system in classical mechanics would be equivalent to an isolated system in thermodynamics. Closed systems are often used to limit the factors that can affect the results of a specific problem or experiment.
In thermodynamics
In thermodynamics, a closed system can exchange energy (as heat or work), but not matter, with its surroundings. An isolated system cannot exchange any heat, work, or matter with the surroundings, while an open system can exchange energy and matter. (This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is used here.) For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically: Σ_j a_ij N_j = b_i, where N_j is the number of j-type molecules, a_ij is the number of atoms of element i in molecule j, and b_i is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system. In thermodynamics, a closed system is important for solving complicated thermodynamic problems. It allows the elimination of some external factors that could alter the results of the experiment or problem, thus simplifying it. A closed system can also be used in situations where thermodynamic equilibrium is required to simplify the situation.
In quantum physics
In quantum physics, the Schrödinger equation describes the behavior of an isolated or closed quantum system, that is, by definition, a system which does not interchange information (i.e. energy and/or matter) with another system. If an isolated system is in some pure state |ψ(t)⟩ ∈ H at time t, where H denotes the Hilbert space of the system, the time evolution of this state (between two consecutive measurements) is given by iħ (∂/∂t) |ψ(t)⟩ = Ĥ |ψ(t)⟩, where i is the imaginary unit, ħ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, ψ (the Greek letter psi) is the wave function of the quantum system, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation).
In chemistry
In chemistry, a closed system is one in which no reactants or products can escape; only heat can be exchanged freely (e.g. an ice cooler). A closed system can be used when conducting chemical experiments where temperature is not a factor (i.e. reaching thermal equilibrium).
In engineering
In an engineering context, a closed system is a bound system, i.e. defined, in which every input is known and every resultant is known (or can be known) within a specific time.
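The element-conservation constraint written above (one equation per element, Σ_j a_ij N_j = b_i) is easy to check numerically. The sketch below verifies it for a closed vessel in which some methane has burned; the particular molecule counts are made up for illustration, and only the bookkeeping reflects the constraint.

```python
# Composition matrix a[element][molecule]: atoms of each element per molecule.
composition = {
    "C": {"CH4": 1, "O2": 0, "CO2": 1, "H2O": 0},
    "H": {"CH4": 4, "O2": 0, "CO2": 0, "H2O": 2},
    "O": {"CH4": 0, "O2": 2, "CO2": 2, "H2O": 1},
}

def element_totals(counts: dict) -> dict:
    """b_i = sum_j a_ij * N_j for every element i."""
    return {el: sum(a * counts[mol] for mol, a in row.items())
            for el, row in composition.items()}

# A closed system before and after the reaction CH4 + 2 O2 -> CO2 + 2 H2O
# has consumed 3 methane molecules (numbers chosen arbitrarily).
before = {"CH4": 10, "O2": 20, "CO2": 0, "H2O": 0}
after  = {"CH4": 7,  "O2": 14, "CO2": 3, "H2O": 6}

print("before:", element_totals(before))
print("after: ", element_totals(after))
assert element_totals(before) == element_totals(after), "matter escaped the closed system!"
```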
See also Glossary of systems theory Dynamical system Isolated system Open system (systems theory) Sense and Respond Thermodynamic system References Cybernetics Systems theory Thermodynamic systems
Molecular model
A molecular model is a physical model of an atomistic system that represents molecules and their processes. They play an important role in understanding chemistry and generating and testing hypotheses. The creation of mathematical models of molecular properties and behavior is referred to as molecular modeling, and their graphical depiction is referred to as molecular graphics. The term, "molecular model" refer to systems that contain one or more explicit atoms (although solvent atoms may be represented implicitly) and where nuclear structure is neglected. The electronic structure is often also omitted unless it is necessary in illustrating the function of the molecule being modeled. Molecular models may be created for several reasons – as pedagogic tools for students or those unfamiliar with atomistic structures; as objects to generate or test theories (e.g., the structure of DNA); as analogue computers (e.g., for measuring distances and angles in flexible systems); or as aesthetically pleasing objects on the boundary of art and science. The construction of physical models is often a creative act, and many bespoke examples have been carefully created in the workshops of science departments. There is a very wide range of approaches to physical modeling, including ball-and-stick models available for purchase commercially, to molecular models created using 3D printers. The main strategy, initially in textbooks and research articles and more recently on computers. Molecular graphics has made the visualization of molecular models on computer hardware easier, more accessible, and inexpensive, although physical models are widely used to enhance the tactile and visual message being portrayed. History In the 1600s, Johannes Kepler speculated on the symmetry of snowflakes and the close packing of spherical objects such as fruit. The symmetrical arrangement of closely packed spheres informed theories of molecular structure in the late 1800s, and many theories of crystallography and solid state inorganic structure used collections of equal and unequal spheres to simulate packing and predict structure. John Dalton represented compounds as aggregations of circular atoms, and although Johann Josef Loschmidt did not create physical models, his diagrams based on circles are two-dimensional analogues of later models. August Wilhelm von Hofmann is credited with the first physical molecular model around 1860. Note how the size of the carbon appears smaller than the hydrogen. The importance of stereochemistry was not then recognised and the model is essentially topological (it should be a 3-dimensional tetrahedron). Jacobus Henricus van 't Hoff and Joseph Le Bel introduced the concept of chemistry in three dimensions of space, that is, stereochemistry. Van 't Hoff built tetrahedral molecules representing the three-dimensional properties of carbon. Models based on spheres Repeating units will help to show how easy it is and clear it is to represent molecules through balls that represent atoms. The binary compounds sodium chloride (NaCl) and caesium chloride (CsCl) have cubic structures but have different space groups. This can be rationalised in terms of close packing of spheres of different sizes. For example, NaCl can be described as close-packed chloride ions (in a face-centered cubic lattice) with sodium ions in the octahedral holes. After the development of X-ray crystallography as a tool for determining crystal structures, many laboratories built models based on spheres. 
With the development of plastic or polystyrene balls it is now easy to create such models.
Models based on ball-and-stick
The concept of the chemical bond as a direct link between atoms can be modelled by linking balls (atoms) with sticks/rods (bonds). This has been extremely popular and is still widely used today. Initially, atoms were made of spherical wooden balls with specially drilled holes for rods. Thus carbon can be represented as a sphere with four holes at the tetrahedral angle cos−1(−1/3) ≈ 109.47°. A problem with rigid bonds and holes is that systems with arbitrary angles could not be built. This can be overcome with flexible bonds, originally helical springs but now usually plastic. This also allows double and triple bonds to be approximated by multiple single bonds. The model shown to the left represents a ball-and-stick model of proline. The balls have colours: black represents carbon (C); red, oxygen (O); blue, nitrogen (N); and white, hydrogen (H). Each ball is drilled with as many holes as its conventional valence (C: 4; N: 3; O: 2; H: 1), directed towards the vertices of a tetrahedron. Single bonds are represented by (fairly) rigid grey rods. Double and triple bonds use two longer flexible bonds, which restrict rotation and support conventional cis/trans stereochemistry. However, most molecules require holes at other angles, and specialist companies manufacture kits and bespoke models. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements. Arnold Beevers in Edinburgh created small models using PMMA balls and stainless steel rods. By using individually drilled balls with precise bond angles and bond lengths, these models allowed large crystal structures to be accurately created in a light and rigid form. Figure 4 shows a unit cell of ruby in this style.
Skeletal models
Crick and Watson's DNA model and the protein-building kits of Kendrew were among the first skeletal models. These were based on atomic components where the valences were represented by rods; the atoms were points at the intersections. Bonds were created by linking components with tubular connectors with locking screws. André Dreiding introduced a molecular modelling kit in the late 1950s which dispensed with the connectors. A given atom would have solid and hollow valence spikes. The solid rods clicked into the tubes forming a bond, usually with free rotation. These were and are very widely used in organic chemistry departments and were made so accurately that interatomic measurements could be made by ruler. More recently, inexpensive plastic models (such as Orbit) use a similar principle. A small plastic sphere has protuberances onto which plastic tubes can be fitted. The flexibility of the plastic means that distorted geometries can be made.
Polyhedral models
Many inorganic solids consist of atoms surrounded by a coordination sphere of electronegative atoms (e.g. PO4 tetrahedra, TiO6 octahedra). Structures can be modelled by gluing together polyhedra made of paper or plastic.
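Two small calculations implicit in the ball-and-stick description above, the tetrahedral hole angle and the physical size of a bond at a stated model scale, are sketched below. The 5 cm per ångström scale comes from the text; the 154 pm C−C bond length is a standard value used here only for illustration.

```python
import math

# Tetrahedral angle: the angle between holes drilled in a "carbon" ball,
# arccos(-1/3), quoted in the text as approximately 109.47 degrees.
tetrahedral_angle = math.degrees(math.acos(-1.0 / 3.0))
print(f"tetrahedral angle = {tetrahedral_angle:.2f} degrees")

# Size of a modelled bond at the classic 5 cm per angstrom scale.
SCALE_CM_PER_ANGSTROM = 5.0
cc_bond_angstrom = 1.54            # typical C-C single bond, ~154 pm
model_length_cm = cc_bond_angstrom * SCALE_CM_PER_ANGSTROM
magnification = (SCALE_CM_PER_ANGSTROM * 1e-2) / 1e-10   # metres of model per metre of molecule
print(f"a C-C bond becomes {model_length_cm:.1f} cm long "
      f"(magnification about {magnification:,.0f}:1)")
```

The magnification works out to the 500,000,000:1 figure given in the text.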
Composite models
A good example of composite models is the Nicholson approach, widely used from the late 1970s for building models of biological macromolecules. The components are primarily amino acids and nucleic acids with preformed residues representing groups of atoms. Many of these atoms are directly moulded into the template, and fit together by pushing plastic stubs into small holes. The plastic grips well and makes bonds difficult to rotate, so that arbitrary torsion angles can be set and retain their value. The conformations of the backbone and side chains are determined by pre-computing the torsion angles and then adjusting the model with a protractor. The plastic is white and can be painted to distinguish between O and N atoms. Hydrogen atoms are normally implicit and modelled by snipping off the spokes. A model of a typical protein with approximately 300 residues could take a month to build. It was common for laboratories to build a model for each protein solved. By 2005, so many protein structures were being determined that relatively few models were made.
Computer-based models
With the development of computer-based physical modelling, it is now possible to create complete single-piece models by feeding the coordinates of a surface into the computer. Figure 6 shows models of anthrax toxin, left (at a scale of approximately 20 Å/cm, or 5,000,000:1), and green fluorescent protein, right (5 cm high, at a scale of about 4 Å/cm, or 25,000,000:1), from 3D Molecular Design. Models are made of plaster or starch, using a rapid prototyping process. It has also recently become possible to create accurate molecular models inside glass blocks using a technique known as subsurface laser engraving. The image at right shows the 3D structure of an E. coli protein (DNA polymerase beta-subunit, PDB code 1MMI) etched inside a block of glass by British company Luminorum Ltd.
Computational models
Computers can also model molecules mathematically. Programs such as Avogadro can run on typical desktops and can predict bond lengths and angles, molecular polarity and charge distribution, and even quantum mechanical properties such as absorption and emission spectra. However, such programs become impractical as more atoms are added, because the number of calculations is quadratic in the number of atoms involved; if four times as many atoms are used in a molecule, the calculations will take 16 times as long. For most practical purposes, such as drug design or protein folding, the calculations of a model require supercomputing or cannot be done on classical computers at all in a reasonable amount of time. Quantum computers can model molecules with fewer calculations because the type of calculation performed in each cycle by a quantum computer is well suited to molecular modelling.
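The quadratic growth mentioned above can be illustrated directly: if a naive model evaluates every pairwise atom–atom interaction, the work grows roughly as N², so quadrupling the atom count multiplies the pair count by about sixteen. A toy sketch:

```python
def pair_count(n_atoms: int) -> int:
    """Number of distinct atom pairs a naive all-pairs calculation must visit."""
    return n_atoms * (n_atoms - 1) // 2

for n in (100, 400, 1600):
    print(f"{n:>5} atoms -> {pair_count(n):>9} pairwise terms")

# Ratio when the atom count is quadrupled: close to 16x, as stated in the text.
print("400 vs 100 atoms:", pair_count(400) / pair_count(100))
```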
Common colors
Some of the most common colors used in molecular models are as follows:
Hydrogen: white
Alkali metals: violet
Alkaline earth metals: dark green
Boron, most transition metals: pink
Carbon: black
Nitrogen: blue
Oxygen: red
Fluorine: green-yellow
Chlorine: lime green
Bromine: dark red
Iodine: dark violet
Noble gases: cyan
Phosphorus: orange
Sulfur: yellow
Titanium: gray
Copper: apricot
Mercury: light grey
Chronology
This table is an incomplete chronology of events where physical molecular models provided major scientific insights.
See also
Molecular design software, Molecular graphics, Molecular modelling, Ribbon diagram, Software for molecular mechanics modeling, Space-filling (Calotte) model
References
Further reading: history of molecular models. Paper presented at the EuroScience Open Forum (ESOF), Stockholm, on August 25, 2004, W. Gerhard Pohl, Austrian Chemical Society. Photo of van't Hoff's tetrahedral models, and Loschmidt's organic formulae (only 2-dimensional). Wooster's biographical notes, including the setting up of Crystal Structure Ltd.
External links
History of Visualization of Biological Macromolecules by Eric Martz and Eric Francoeur. Contains a mixture of physical models and molecular graphics.
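For programmatic rendering, the color conventions listed above map naturally onto a lookup table. A minimal sketch follows; the hex values are approximate choices for the named colors, not a formal standard.

```python
# Approximate RGB hex codes for the model-color conventions listed above.
ELEMENT_COLORS = {
    "H": "#FFFFFF",   # white
    "C": "#000000",   # black
    "N": "#0000FF",   # blue
    "O": "#FF0000",   # red
    "S": "#FFFF00",   # yellow
    "P": "#FF8C00",   # orange
    "F": "#7FFF00",   # green-yellow
    "Cl": "#32CD32",  # lime green
    "Br": "#8B0000",  # dark red
    "I": "#9400D3",   # dark violet
}

def color_for(element: str, default: str = "#FFC0CB") -> str:
    """Return the display color for an element, falling back to pink
    (the convention above for boron and most transition metals)."""
    return ELEMENT_COLORS.get(element, default)

for atom in ("C", "O", "N", "H", "Fe"):
    print(atom, "->", color_for(atom))
```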
Organosulfur chemistry
Organosulfur chemistry is the study of the properties and synthesis of organosulfur compounds, which are organic compounds that contain sulfur. They are often associated with foul odors, but many of the sweetest compounds known are organosulfur derivatives, e.g., saccharin. Nature abounds with organosulfur compounds; sulfur is vital for life. Of the 20 common amino acids, two (cysteine and methionine) are organosulfur compounds, and the antibiotics penicillin and sulfa drugs both contain sulfur. While sulfur-containing antibiotics save many lives, sulfur mustard is a deadly chemical warfare agent. Fossil fuels (coal, petroleum, and natural gas), which are derived from ancient organisms, necessarily contain organosulfur compounds, the removal of which is a major focus of oil refineries. Sulfur shares the chalcogen group with oxygen, selenium, and tellurium, and it is expected that organosulfur compounds have similarities with carbon–oxygen, carbon–selenium, and carbon–tellurium compounds. A classical chemical test for the detection of sulfur compounds is the Carius halogen method.
Structural classes
Organosulfur compounds can be classified according to the sulfur-containing functional groups, which are listed (approximately) in decreasing order of their occurrence.
Sulfides
Sulfides, formerly known as thioethers, are characterized by C−S−C bonds. Relative to C−C bonds, C−S bonds are both longer, because sulfur atoms are larger than carbon atoms, and about 10% weaker. Representative bond lengths in sulfur compounds are 183 pm for the S−C single bond in methanethiol and 173 pm in thiophene. The C−S bond dissociation energy for thiomethane is 89 kcal/mol (370 kJ/mol), compared to methane's 100 kcal/mol (420 kJ/mol), and when hydrogen is replaced by a methyl group the energy decreases to 73 kcal/mol (305 kJ/mol). The single carbon–oxygen bond is shorter than the C−C bond. The bond dissociation energies for dimethyl sulfide and dimethyl ether are respectively 73 and 77 kcal/mol (305 and 322 kJ/mol). Sulfides are typically prepared by alkylation of thiols. Alkylating agents include not only alkyl halides, but also epoxides, aziridines, and Michael acceptors. They can also be prepared via the Pummerer rearrangement. In the Ferrario reaction, phenyl ether is converted to phenoxathiin by the action of elemental sulfur and aluminium chloride. Thioacetals and thioketals feature the C−S−C−S−C bond sequence. They represent a subclass of sulfides. The thioacetals are useful in "umpolung" of carbonyl groups. Thioacetals and thioketals can also be used to protect a carbonyl group in organic syntheses. The above classes of sulfur compounds also exist in saturated and unsaturated heterocyclic structures, often in combination with other heteroatoms, as illustrated by thiiranes, thiirenes, thietanes, thietes, dithietanes, thiolanes, thianes, dithianes, thiepanes, thiepines, thiazoles, isothiazoles, and thiophenes, among others. The latter three compounds represent a special class of sulfur-containing heterocycles that are aromatic. The resonance stabilization of thiophene is 29 kcal/mol (121 kJ/mol), compared to 20 kcal/mol (84 kJ/mol) for the oxygen analogue furan. The reason for this difference is the higher electronegativity of oxygen, which draws electrons toward itself at the expense of the aromatic ring current. Yet, as an aromatic substituent, the thio group is less electron-releasing than the alkoxy group.
Dibenzothiophenes (see drawing), tricyclic heterocycles consisting of two benzene rings fused to a central thiophene ring, occurs widely in heavier fractions of petroleum. Thiols, disulfides, polysulfides Thiol groups contain the functionality R−SH. Thiols are structurally similar to the alcohol group, but these functionalities are very different in their chemical properties. Thiols are more nucleophilic, more acidic, and more readily oxidized. This acidity can differ by 5 pKa units. The difference in electronegativity between sulfur (2.58) and hydrogen (2.20) is small and therefore hydrogen bonding in thiols is not prominent. Aliphatic thiols form monolayers on gold, which are topical in nanotechnology. Certain aromatic thiols can be accessed through a Herz reaction. Disulfides R−S−S−R with a covalent sulfur to sulfur bond are important for crosslinking: in biochemistry for the folding and stability of some proteins and in polymer chemistry for the crosslinking of rubber. Longer sulfur chains are also known, such as in the natural product varacin which contains an unusual pentathiepin ring (5-sulfur chain cyclised onto a benzene ring). Thioesters Thioesters have general structure R−C(O)−S−R. They are related to regular esters (R−C(O)−O−R) but are more susceptible to hydrolysis and related reactions. Thioesters formed from coenzyme A are prominent in biochemistry, especially in fatty acid synthesis. Sulfoxides, sulfones and thiosulfinates A sulfoxide, R−S(O)−R, is the S-oxide of a sulfide ("sulfide oxide"), a sulfone, R−S(O)2−R, is the S,S-dioxide of a sulfide, a thiosulfinate, R−S(O)−S−R, is the S-oxide of a disulfide, and a thiosulfonate, R−S(O)2−S−R, is the S,S-dioxide of a disulfide. All of these compounds are well known with extensive chemistry, e.g., dimethyl sulfoxide, dimethyl sulfone, and allicin (see drawing). Sulfimides, sulfoximides, sulfonediimines Sulfimides (also called a sulfilimines) are sulfur–nitrogen compounds of structure R2S=NR′, the nitrogen analog of sulfoxides. They are of interest in part due to their pharmacological properties. When two different R groups are attached to sulfur, sulfimides are chiral. Sulfimides form stable α-carbanions. Sulfoximides (also called sulfoximines) are tetracoordinate sulfur–nitrogen compounds, isoelectronic with sulfones, in which one oxygen atom of the sulfone is replaced by a substituted nitrogen atom, e.g., R2S(O)=NR′. When two different R groups are attached to sulfur, sulfoximides are chiral. Much of the interest in this class of compounds is derived from the discovery that methionine sulfoximide (methionine sulfoximine) is an inhibitor of glutamine synthetase. Sulfonediimines (also called sulfodiimines, sulfodiimides or sulfonediimides) are tetracoordinate sulfur–nitrogen compounds, isoelectronic with sulfones, in which both oxygen atoms of the sulfone are replaced by a substituted nitrogen atom, e.g., R2S(=NR′)2. They are of interest because of their biological activity and as building blocks for heterocycle synthesis. S-Nitrosothiols S-Nitrosothiols, also known as thionitrites, are compounds containing a nitroso group attached to the sulfur atom of a thiol, e.g. R−S−N=O. They have received considerable attention in biochemistry because they serve as donors of the nitrosonium ion, NO+, and nitric oxide, NO, which may serve as signaling molecules in living systems, especially related to vasodilation. 
Sulfur halides A wide range of organosulfur compounds are known which contain one or more halogen atom ("X" in the chemical formulas that follow) bonded to a single sulfur atom, e.g.: sulfenyl halides, RSX; sulfinyl halides, RS(O)X; sulfonyl halides, RSO2X; alkyl and arylsulfur trichlorides, RSCl3 and trifluorides, RSF3; and alkyl and arylsulfur pentafluorides, RSF5. Less well known are dialkylsulfur tetrahalides, mainly represented by the tetrafluorides, e.g., R2SF4. Thioketones, thioaldehydes, and related compounds Compounds with double bonds between carbon and sulfur are relatively uncommon, but include the important compounds carbon disulfide, carbonyl sulfide, and thiophosgene. Thioketones (RC(=S)R′) are uncommon with alkyl substituents, but one example is thiobenzophenone. Thioaldehydes are rarer still, reflecting their lack of steric protection ("thioformaldehyde" exists as a cyclic trimer). Thioamides, with the formula R1C(=S)N(R2)R3 are more common. They are typically prepared by the reaction of amides with Lawesson's reagent. Isothiocyanates, with formula R−N=C=S, are found naturally. Vegetable foods with characteristic flavors due to isothiocyanates include wasabi, horseradish, mustard, radish, Brussels sprouts, watercress, nasturtiums, and capers. S-Oxides and S,S-dioxides of thiocarbonyl compounds The S-oxides of thiocarbonyl compounds are known as thiocarbonyl S-oxides: (R2C=S=O, and thiocarbonyl S,S-dioxides or sulfenes, R2C=SO2). The thione S-oxides have also been known as sulfines, and while IUPAC considers this term obsolete, the name persists in the literature. These compounds are well known with extensive chemistry. Examples include syn-propanethial-S-oxide and sulfene. Triple bonds between carbon and sulfur Triple bonds between sulfur and carbon in sulfaalkynes are rare and can be found in carbon monosulfide (CS) and have been suggested for the compounds F3CCSF3 and F5SCSF3. The compound HCSOH is also represented as having a formal triple bond. Thiocarboxylic acids and thioamides Thiocarboxylic acids (RC(O)SH) and dithiocarboxylic acids (RC(S)SH) are well known. They are structurally similar to carboxylic acids but more acidic. Thioamides are analogous to amides. Sulfonic, sulfinic and sulfenic acids, esters, amides, and related compounds Sulfonic acids have functionality R−S(=O)2−OH. They are strong acids that are typically soluble in organic solvents. Sulfonic acids like trifluoromethanesulfonic acid is a frequently used reagent in organic chemistry. Sulfinic acids have functionality R−S(O)−OH while sulfenic acids have functionality R−S−OH. In the series sulfonic—sulfinic—sulfenic acids, both the acid strength and stability diminish in that order. Sulfonamides, sulfinamides and sulfenamides, with formulas R−SO2NR′2, R−S(O)NR′2, and R−SNR′2, respectively, each have a rich chemistry. For example, sulfa drugs are sulfonamides derived from aromatic sulfonation. Chiral sulfinamides are used in asymmetric synthesis, while sulfenamides are used extensively in the vulcanization process to assist cross-linking. Thiocyanates, R−S−CN, are related to sulfenyl halides and esters in terms of reactivity. Sulfonium, oxosulfonium and related salts A sulfonium ion is a positively charged ion featuring three organic substituents attached to sulfur, with the formula [R3S]+. Together with their negatively charged counterpart, the anion, the compounds are called sulfonium salts. 
An oxosulfonium ion is a positively charged ion featuring three organic substituents and an oxygen attached to sulfur, with the formula [R3S=O]+. Together with their negatively charged counterpart, the anion, the compounds are called oxosulfonium salts. Related species include alkoxysulfonium and chlorosulfonium ions, [R2SOR]+ and [R2SCl]+, respectively. Sulfonium, oxosulfonium and thiocarbonyl ylides Deprotonation of sulfonium and oxosulfonium salts affords ylides, of structure R2S+−C−−R′2 and R2S(O)+−C−−R′2. While sulfonium ylides, for instance in the Johnson–Corey–Chaykovsky reaction used to synthesize epoxides, are sometimes drawn with a C=S double bond, e.g., R2S=CR′2, the ylidic carbon–sulfur bond is highly polarized and is better described as being ionic. Sulfonium ylides are key intermediates in the synthetically useful Stevens rearrangement. Thiocarbonyl ylides (RR′C=S+−C−−RR′) can form by ring-opening of thiiranes, photocyclization of aryl vinyl sulfides, as well as by other processes. Sulfuranes and persulfuranes Sulfuranes are a relatively specialized functional group featuring tetravalent sulfur, with the formula SR4. Likewise, persulfuranes feature hexavalent sulfur, SR6. One of the few all-carbon persulfuranes has two methyl and two biphenylene ligands: it is prepared by treating the corresponding sulfurane 1 with xenon difluoride / boron trifluoride in acetonitrile to give the sulfuranyl dication 2, followed by reaction with methyllithium in tetrahydrofuran to give the stable persulfurane 3 as the cis isomer. X-ray diffraction shows C−S bond lengths ranging between 189 and 193 pm (longer than the standard bond length), with the central sulfur atom in a distorted octahedral molecular geometry. Organosulfur compounds in nature A variety of organosulfur compounds occur in nature. Most abundant are the amino acids methionine, cysteine, and cystine. The vitamins biotin and thiamine, as well as lipoic acid, contain sulfur heterocycles. Glutathione is the primary intracellular antioxidant. Penicillin and cephalosporin are life-saving antibiotics, derived from fungi. Gliotoxin is a sulfur-containing mycotoxin, produced by several species of fungi, that has been investigated as an antiviral agent. In fossil fuels Organosulfur compounds are present in petroleum fractions at levels of 200–500 ppm. Common compounds are thiophenes, especially dibenzothiophenes. By the process of hydrodesulfurization (HDS) in refineries, these compounds are removed, as illustrated by the hydrogenolysis of thiophene to butane and hydrogen sulfide (C4H4S + 4 H2 → C4H10 + H2S). Flavor and odor Compounds like allicin and ajoene are responsible for the odor of garlic. Lenthionine contributes to the flavor of shiitake mushrooms. Volatile organosulfur compounds also contribute subtle flavor characteristics to wine, nuts, cheddar cheese, chocolate, coffee, and tropical fruit flavors. Many of these natural products also have important medicinal properties such as preventing platelet aggregation or fighting cancer. Humans and other animals have an exquisitely sensitive sense of smell toward the odor of low-valent organosulfur compounds such as thiols, sulfides, and disulfides. Malodorous volatile thiols are protein-degradation products found in putrid food, so sensitive identification of these compounds is crucial to avoiding intoxication. Low-valent volatile sulfur compounds are also found in areas where oxygen levels in the air are low, posing a risk of suffocation. 
Copper is required for the highly sensitive detection of certain volatile thiols and related organosulfur compounds by olfactory receptors in mice. Whether humans, too, require copper for sensitive detection of thiols is not yet known.
Isoelectric point
The isoelectric point (pI, pH(I), IEP) is the pH at which a molecule carries no net electrical charge or is electrically neutral in the statistical mean. The standard nomenclature to represent the isoelectric point is pH(I), although pI is also used. For brevity, this article uses pI. The net charge on the molecule is affected by the pH of its surrounding environment and can become more positively or negatively charged due to the gain or loss, respectively, of protons (H+). Surfaces naturally charge to form a double layer. In the common case when the surface charge-determining ions are H+/HO−, the net surface charge is affected by the pH of the liquid in which the solid is submerged. The pI value can affect the solubility of a molecule at a given pH. Such molecules have minimum solubility in water or salt solutions at the pH that corresponds to their pI and often precipitate out of solution. Biological amphoteric molecules such as proteins contain both acidic and basic functional groups. Amino acids that make up proteins may be positive, negative, neutral, or polar in nature, and together give a protein its overall charge. At a pH below their pI, proteins carry a net positive charge; above their pI they carry a net negative charge. Proteins can, thus, be separated by net charge in a polyacrylamide gel using either preparative native PAGE, which uses a constant pH to separate proteins, or isoelectric focusing, which uses a pH gradient to separate proteins. Isoelectric focusing is also the first step in two-dimensional (2-D) polyacrylamide gel electrophoresis. Proteins can also be separated by ion exchange chromatography. Biological proteins are made up of zwitterionic amino acid compounds; the net charge of these proteins can be positive or negative depending on the pH of the environment. The specific pI of the target protein can be used to design the separation, and the protein can then be purified from the rest of the mixture. Buffers of various pH can be used for this purification process to change the pH of the environment. When a mixture containing a target protein is loaded into an ion exchanger, the stationary matrix can be either positively-charged (for mobile anions) or negatively-charged (for mobile cations). At low pH values, the net charge of most proteins in the mixture is positive – in cation exchangers, these positively-charged proteins bind to the negatively-charged matrix. At high pH values, the net charge of most proteins is negative, and they bind to the positively-charged matrix in anion exchangers. When the environment is at a pH value equal to the protein's pI, the net charge is zero, and the protein is not bound to any exchanger and can therefore be eluted. Calculating pI values For an amino acid with only one amine and one carboxyl group, the pI can be calculated from the mean of the pKas of this molecule (a worked example is sketched below). The pH of an electrophoretic gel is determined by the buffer used for that gel. If the pH of the buffer is above the pI of the protein being run, the protein will migrate to the positive pole (negative charge is attracted to a positive pole). If the pH of the buffer is below the pI of the protein being run, the protein will migrate to the negative pole of the gel (positive charge is attracted to the negative pole). If the protein is run with a buffer pH that is equal to the pI, it will not migrate at all. This is also true for individual amino acids. Examples In the two examples below, the isoelectric point is shown by the green vertical line. 
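As a concrete illustration of the mean-of-pKa rule just described, the short Python sketch below computes the pI of glycine. The pKa values used (about 2.34 for the carboxyl group and 9.60 for the ammonium group) are approximate textbook figures and are assumptions for illustration only; exact values vary slightly between sources.

```python
# Minimal sketch: pI of a simple amino acid (one amine, one carboxyl group)
# taken as the arithmetic mean of its two pKa values.

def isoelectric_point(pka_carboxyl: float, pka_amine: float) -> float:
    """Return the pI as the mean of the two pKa values."""
    return (pka_carboxyl + pka_amine) / 2

# Approximate textbook pKa values for glycine (illustrative assumptions).
glycine_pi = isoelectric_point(2.34, 9.60)
print(f"Estimated pI of glycine: {glycine_pi:.2f}")  # about 5.97
```

The same averaging rule does not extend directly to molecules with more ionizable groups; for those, the relevant pKas are the two that bracket the neutral (zwitterionic) species.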
In glycine the pK values are separated by nearly 7 units. Thus, in the gas phase, the concentration of the neutral species, glycine (GlyH), is effectively 100% of the analytical glycine concentration. Glycine may exist as a zwitterion at the isoelectric point, but the equilibrium constant for the isomerization reaction in solution H2NCH2CO2H <=> H3N+CH2CO2- is not known. The other example, adenosine monophosphate, is shown to illustrate the fact that a third species may, in principle, be involved. In fact, the concentration of that third species is negligible at the isoelectric point in this case. If the pI is greater than the pH, the molecule will have a positive charge. Peptides and proteins A number of algorithms for estimating the isoelectric points of peptides and proteins have been developed. Most of them use the Henderson–Hasselbalch equation with different pK values. For instance, within the model proposed by Bjellqvist and co-workers, the pKs were determined between closely related immobilines by focusing the same sample in overlapping pH gradients. Some improvements in the methodology (especially in the determination of the pK values for modified amino acids) have also been proposed. More advanced methods take into account the effect of adjacent amino acids up to ±3 residues away from a charged aspartic or glutamic acid, the effects on the free C terminus, and apply a correction term to the corresponding pK values using a genetic algorithm. Other recent approaches are based on a support vector machine algorithm and pKa optimization against experimentally known protein/peptide isoelectric points. Moreover, experimentally measured isoelectric points of proteins have been aggregated into databases. Recently, a database of isoelectric points predicted for all proteins using most of the available methods has also been developed. In practice, a protein with an excess of basic amino acids (arginine, lysine and/or histidine) will generally have an isoelectric point greater than 7 (basic), while a protein with an excess of acidic amino acids (aspartic acid and/or glutamic acid) will often have an isoelectric point lower than 7 (acidic). The electrophoretic linear (horizontal) separation of proteins by pI along a pH gradient in a polyacrylamide gel (also known as isoelectric focusing), followed by a standard linear (vertical) separation by molecular weight in a second polyacrylamide gel (SDS-PAGE), constitutes the so-called two-dimensional gel electrophoresis or 2-D PAGE. This technique allows a thorough separation of proteins as distinct "spots", with proteins of high molecular weight and low pI migrating to the upper-left part of the two-dimensional gel, while proteins with low molecular weight and high pI locate to the bottom-right region of the same gel. Ceramic materials The isoelectric points (IEP) of metal oxide ceramics are used extensively in materials science in various aqueous processing steps (synthesis, modification, etc.). In the absence of chemisorbed or physisorbed species, particle surfaces in aqueous suspension are generally assumed to be covered with surface hydroxyl species, M-OH (where M is a metal such as Al, Si, etc.). At pH values above the IEP, the predominant surface species is M-O−, while at pH values below the IEP, M-OH2+ species predominate. Some approximate values for common ceramics are listed below. Note: The following list gives the isoelectric point at 25 °C for selected materials in water. 
The exact value can vary widely, depending on material factors such as purity and phase as well as on physical parameters such as temperature. Moreover, the precise measurement of isoelectric points can be difficult; thus, many sources cite differing values for the isoelectric points of these materials. Mixed oxides may exhibit isoelectric point values that are intermediate to those of the corresponding pure oxides. For example, a synthetically prepared amorphous aluminosilicate (Al2O3-SiO2) was initially measured as having an IEP of 4.5 (the electrokinetic behavior of the surface was dominated by surface Si-OH species, thus explaining the relatively low IEP value). Significantly higher IEP values (pH 6 to 8) have been reported for 3Al2O3-2SiO2 by others. Similarly, the IEP of barium titanate, BaTiO3, has been reported in the range 5–6, while others obtained a value of 3. Mixtures of titania (TiO2) and zirconia (ZrO2) were studied and found to have isoelectric points between 5.3 and 6.9, varying non-linearly with the ZrO2 content. The surface charge of the mixed oxides was correlated with acidity. Greater titania content led to increased Lewis acidity, whereas zirconia-rich oxides displayed Brønsted acidity. The different types of acidities produced differences in ion adsorption rates and capacities. Versus point of zero charge The terms isoelectric point (IEP) and point of zero charge (PZC) are often used interchangeably, although under certain circumstances it may be productive to make the distinction. In systems in which H+/OH− are the interface potential-determining ions, the point of zero charge is given in terms of pH. The pH at which the surface exhibits a neutral net electrical charge is the point of zero charge at the surface. Electrokinetic phenomena generally measure zeta potential, and a zero zeta potential is interpreted as the point of zero net charge at the shear plane. This is termed the isoelectric point. Thus, the isoelectric point is the value of pH at which the colloidal particle remains stationary in an electrical field. The isoelectric point is expected to be somewhat different from the point of zero charge at the particle surface, but this difference is often ignored in practice for so-called pristine surfaces, i.e., surfaces with no specifically adsorbed positive or negative charges. In this context, specific adsorption is understood as adsorption occurring in a Stern layer or chemisorption. Thus, the point of zero charge at the surface is taken as equal to the isoelectric point in the absence of specific adsorption on that surface. According to Jolivet, in the absence of positive or negative charges, the surface is best described by the point of zero charge. If positive and negative charges are both present in equal amounts, then this is the isoelectric point. Thus, the PZC refers to the absence of any type of surface charge, while the IEP refers to a state of neutral net surface charge. The difference between the two, therefore, is the quantity of charged sites at the point of net zero charge. Jolivet uses the intrinsic surface equilibrium constants, pK− and pK+, to define the two conditions in terms of the relative number of charged sites: For large ΔpK (>4 according to Jolivet), the predominant species is MOH while there are relatively few charged species – so the PZC is relevant. For small values of ΔpK, there are many charged species in approximately equal numbers, so one speaks of the IEP. 
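To put the ΔpK discussion above in equation form, the lines below give the relations usually written for the common two-pK description of an oxide surface. The notation follows the pK+ and pK− constants mentioned in the text; the expression for the point of zero charge is the standard result of that model rather than something stated explicitly in this article, so treat it as a hedged sketch.

```latex
% Two-pK surface model (sketch). Surface protonation equilibria:
%   M-OH2+  <=>  M-OH + H+    (intrinsic constant K+)
%   M-OH    <=>  M-O-  + H+    (intrinsic constant K-)
% Assumed sign convention: \Delta pK measures the separation of the two constants.
\Delta\mathrm{p}K = \mathrm{p}K_{-} - \mathrm{p}K_{+},
\qquad
\mathrm{pH}_{\mathrm{PZC}} = \tfrac{1}{2}\left(\mathrm{p}K_{+} + \mathrm{p}K_{-}\right)
```

A large ΔpK then means that at the point of zero charge almost all sites are in the neutral M-OH form, which is the situation Jolivet associates with the PZC rather than the IEP.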
See also Electrophoretic deposition Henderson-Hasselbalch equation Isoelectric focusing Isoionic point pK acid dissociation constant Preparative native PAGE Zeta potential Further reading Nelson DL, Cox MM (2004). Lehninger Principles of Biochemistry. W. H. Freeman; 4th edition (Hardcover). Kosmulski M. (2009). Surface Charging and Points of Zero Charge. CRC Press; 1st edition (Hardcover). External links IPC – Isoelectric Point Calculator — calculate protein isoelectric point using over 15 methods prot pi – protein isoelectric point — an online program for calculating pI of proteins (include multiple subunits and posttranslational modifications) CurTiPot — a suite of spreadsheets for computing acid-base equilibria (charge versus pH plot of amphoteric molecules e.g., amino acids) pICalculax — Isoelectric point (pI) predictor for chemically modified peptides and proteins SWISS-2DPAGE — a database of isoelectric points coming from two-dimensional polyacrylamide gel electrophoresis (~ 2,000 proteins) PIP-DB — a Protein Isoelectric Point database (~ 5,000 proteins) Proteome-pI — a proteome isoelectric point database (predicted isoelectric point for all proteins)
Water of crystallization
In chemistry, water(s) of crystallization or water(s) of hydration are water molecules that are present inside crystals. Water is often incorporated in the formation of crystals from aqueous solutions. In some contexts, water of crystallization is the total mass of water in a substance at a given temperature and is mostly present in a definite (stoichiometric) ratio. Classically, "water of crystallization" refers to water that is found in the crystalline framework of a metal complex or a salt, which is not directly bonded to the metal cation. Upon crystallization from water, or water-containing solvents, many compounds incorporate water molecules in their crystalline frameworks. Water of crystallization can generally be removed by heating a sample, but the crystalline properties are often lost. Compared to inorganic salts, proteins crystallize with large amounts of water in the crystal lattice. A water content of 50% is not uncommon for proteins. Applications Knowledge of hydration is essential for calculating the masses for many compounds. The reactivity of many salt-like solids is sensitive to the presence of water. The hydration and dehydration of salts is central to the use of phase-change materials for energy storage. Position in the crystal structure A salt with associated water of crystallization is known as a hydrate. The structure of hydrates can be quite elaborate, because of the existence of hydrogen bonds that define polymeric structures. Historically, the structures of many hydrates were unknown, and the dot in the formula of a hydrate was employed to specify the composition without indicating how the water is bound. Per IUPAC's recommendations, the middle dot is not surrounded by spaces when indicating a chemical adduct. Examples: CuSO4·5H2O – copper(II) sulfate pentahydrate; CoCl2·6H2O – cobalt(II) chloride hexahydrate; SnCl2·2H2O – tin(II) (or stannous) chloride dihydrate. For many salts, the exact bonding of the water is unimportant because the water molecules are made labile upon dissolution. For example, an aqueous solution prepared from a hydrate and one prepared from the corresponding anhydrous salt behave identically. Therefore, knowledge of the degree of hydration is important only for determining the equivalent weight: one mole of the hydrate weighs more than one mole of the anhydrous salt. In some cases, the degree of hydration can be critical to the resulting chemical properties. For example, anhydrous RhCl3 is not soluble in water and is relatively useless in organometallic chemistry, whereas RhCl3·3H2O is versatile. Similarly, hydrated AlCl3 is a poor Lewis acid and thus inactive as a catalyst for Friedel-Crafts reactions. Samples of AlCl3 must therefore be protected from atmospheric moisture to preclude the formation of hydrates. Crystals of hydrated copper(II) sulfate consist of [Cu(H2O)4]2+ centers linked to sulfate ions. Copper is surrounded by six oxygen atoms, provided by two different sulfate groups and four molecules of water. A fifth water resides elsewhere in the framework but does not bind directly to copper. The cobalt chloride mentioned above crystallizes as trans-[CoCl2(H2O)4]·2H2O, with four waters bound to cobalt and two further waters of crystallization. In tin chloride, each Sn(II) center is pyramidal (mean angle 83°), being bound to two chloride ions and one water. The second water in the formula unit is hydrogen-bonded to the chloride and to the coordinated water molecule. Water of crystallization is stabilized by electrostatic attractions; consequently, hydrates are common for salts that contain +2 and +3 cations as well as −2 anions. In some cases, the majority of the weight of a compound arises from water. Glauber's salt, Na2SO4·10H2O, is a white crystalline solid with greater than 50% water by weight. 
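As a quick check of the Glauber's salt figure just quoted, the short Python sketch below computes the water mass fraction of Na2SO4·10H2O. The molar masses used are routine, rounded reference values rather than data taken from this article.

```python
# Minimal sketch: water mass fraction of Glauber's salt, Na2SO4·10H2O.
# Molar masses (g/mol) are standard reference values, rounded.
M_NA2SO4 = 2 * 22.99 + 32.07 + 4 * 16.00   # ~142.05 g/mol
M_H2O = 18.02

n_waters = 10
m_hydrate = M_NA2SO4 + n_waters * M_H2O     # ~322.2 g/mol
water_fraction = n_waters * M_H2O / m_hydrate

print(f"Water mass fraction: {water_fraction:.1%}")  # roughly 56%, i.e. >50% by weight
```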
Consider the case of nickel(II) chloride hexahydrate. This species has the formula NiCl2·6H2O. Crystallographic analysis reveals that the solid consists of trans-[NiCl2(H2O)4] subunits that are hydrogen bonded to each other, as well as two additional molecules of H2O. Thus one third of the water molecules in the crystal are not directly bonded to nickel, and these might be termed "water of crystallization". Analysis The water content of most compounds can be determined with a knowledge of their formula. An unknown sample can be analyzed through thermogravimetric analysis (TGA), in which the sample is heated strongly and its accurate weight is plotted against the temperature. The amount of water driven off is then divided by the molar mass of water to obtain the number of molecules of water bound to the salt (a worked example is sketched below). Other solvents of crystallization Water is a particularly common solvent to be found in crystals because it is small and polar. But all solvents can be found in some host crystals. Water is noteworthy because it is reactive, whereas other solvents such as benzene are considered to be chemically innocuous. Occasionally more than one solvent is found in a crystal, and often the stoichiometry is variable, reflected in the crystallographic concept of "partial occupancy". It is common and conventional for a chemist to "dry" a sample with a combination of vacuum and heat "to constant weight". For other solvents of crystallization, analysis is conveniently accomplished by dissolving the sample in a deuterated solvent and analyzing the sample for solvent signals by NMR spectroscopy. Single crystal X-ray crystallography is often able to detect the presence of these solvents of crystallization as well. Other methods may also be available. Table of crystallization water in some inorganic halides The table below indicates the number of molecules of water per metal in various salts. Examples are rare for second and third row metals. No entries exist for Mo, W, Tc, Ru, Os, Rh, Ir, Pd, Hg, Au. AuCl3(H2O) has been invoked but its crystal structure has not been reported. Hydrates of metal sulfates Transition metal sulfates form a variety of hydrates, each of which crystallizes in only one form. The sulfate group often binds to the metal, especially for those salts with fewer than six aquo ligands. The heptahydrates, which are often the most common salts, crystallize as monoclinic and the less common orthorhombic forms. In the heptahydrates, one water is in the lattice and the other six are coordinated to the ferrous center. Many of the metal sulfates occur in nature, being the result of weathering of mineral sulfides. Many monohydrates are known. Hydrates of metal nitrates Transition metal nitrates form a variety of hydrates. The nitrate anion often binds to the metal, especially for those salts with fewer than six aquo ligands. Nitrates are uncommon in nature, so few minerals are represented here. Hydrated ferrous nitrate has not been characterized crystallographically. See also Hydrate Mineral hydration Hydrous oxide
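To make the thermogravimetric determination described in the Analysis section concrete, the following minimal Python sketch converts a measured mass loss into an approximate hydration number. The copper(II) sulfate figures and variable names are illustrative assumptions, not data from this article.

```python
# Minimal sketch: estimate the hydration number x in MX·xH2O from a TGA mass loss.
# Example figures correspond roughly to CuSO4·5H2O and are for illustration only.

M_H2O = 18.02          # g/mol
M_ANHYDROUS = 159.61   # g/mol, anhydrous CuSO4 (reference value)

sample_mass = 1.000          # g of hydrate weighed out
mass_after_heating = 0.639   # g remaining once all water is driven off

water_lost = sample_mass - mass_after_heating          # g of water released
moles_water = water_lost / M_H2O
moles_salt = mass_after_heating / M_ANHYDROUS
hydration_number = moles_water / moles_salt

print(f"Approximate hydration number: {hydration_number:.1f}")  # close to 5
```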
Petrifaction
In geology, petrifaction or petrification is the process by which organic material becomes a fossil through the replacement of the original material and the filling of the original pore spaces with minerals. Petrified wood typifies this process, but all organisms, from bacteria to vertebrates, can become petrified (although harder, more durable matter such as bone, beaks, and shells survive the process better than softer remains such as muscle tissue, feathers, or skin). Petrification takes place through a combination of two similar processes: permineralization and replacement. These processes create replicas of the original specimen that are similar down to the microscopic level. Processes Permineralization One of the processes involved in petrifaction is permineralization. The fossils created through this process tend to contain a large amount of the original material of the specimen. This process occurs when groundwater containing dissolved minerals (most commonly quartz, calcite, apatite (calcium phosphate), siderite (iron carbonate), and pyrite), fills pore spaces and cavities of specimens, particularly bone, shell or wood. The pores of the organisms' tissues are filled when these minerals precipitate out of the water. Two common types of permineralization are silicification and pyritization. Silicification Silicification is the process in which organic matter becomes saturated with silica. A common source of silica is volcanic material. Studies have shown that in this process, most of the original organic matter is destroyed. Silicification most often occurs in two environments—either the specimen is buried in sediments of deltas and floodplains or organisms are buried in volcanic ash. Water must be present for silicification to occur because it reduces the amount of oxygen present and therefore reduces the deterioration of the organism by fungi, maintains organism shape, and allows for the transportation and deposition of silica. The process begins when a specimen is permeated with an aqueous silica solution. The cell walls of the specimen are progressively dissolved and silica is deposited into the empty spaces. In wood samples, as the process proceeds, cellulose and lignin, two components of wood, are degraded and replaced with silica. The specimen is transformed to stone (a process called lithification) as water is lost. For silicification to occur, the geothermic conditions must include a neutral to slightly acidic pH and a temperature and pressure similar to shallow-depth sedimentary environments. Under ideal natural conditions, silicification can occur at rates approaching those seen in artificial petrification. Pyritization Pyritization is a process similar to silicification, but instead involves the deposition of iron and sulfur in the pores and cavities of an organism. Pyritization can result in both solid fossils as well as preserved soft tissues. In marine environments, pyritization occurs when organisms are buried in sediments containing a high concentration of iron sulfides. Organisms release sulfide, which reacts with dissolved iron in the surrounding water, when they decay. This reaction between iron and sulfides forms pyrite (FeS2). Carbonate shell material of the organism is then replaced with pyrite due to a higher concentration of pyrite and a lower concentration of carbonate in the surrounding water. Pyritization occurs to a lesser extent in plants in clay environments. 
Replacement Replacement, the second process involved in petrifaction, occurs when water containing dissolved minerals dissolves the original solid material of an organism, which is then replaced by minerals. This can take place extremely slowly, replicating the microscopic structure of the organism. The slower the rate of the process, the better defined the microscopic structure will be. The minerals commonly involved in replacement are calcite, silica, pyrite, and hematite. Biotic remains preserved by replacement alone (as opposed to in combination with permineralization) are rarely found, but these fossils present significance to paleontology because they tend to be more detailed. Uses Not only are the fossils produced through the process of petrifaction used for paleontological study, but they have also been used as both decorative and informative pieces. Petrified wood is used in several ways. Slabs of petrified wood can be crafted into tabletops, or the slabs themselves are sometimes displayed in a decorative fashion. Also, larger pieces of the wood have been carved into sinks and basins. Other large pieces can also be crafted into chairs and stools. Petrified wood and other petrified organisms have also been used in jewelry, sculpture, clock-making, ashtrays and fruit bowls, and landscape and garden decorations. Architecture Petrified wood has also been used in construction. The Petrified Wood Gas Station, located on Main St Lamar, Colorado, was built in 1932 and consists of walls and floors constructed from pieces of petrified wood. The structure, built by W.G. Brown, has since been converted to office space and a used car dealership. Glen Rose, Texas provides even more examples of the use of fossilized wood in architecture. Beginning in the 1920s, the farmers of Somervell County, Texas began uncovering petrified trees. Local craftsmen and masons then built over 65 structures from this petrified wood, 45 of which were still standing as of June 2009. These structures include gas stations, flowerbeds, cottages, restaurants, fountains and gateposts. Glen Rose, Texas is also noted for Dinosaur Valley State Park and the Glen Rose Formation, where fossilized dinosaur footprints from the Cretaceous period can be viewed. Another example of the use of petrified wood in construction is the Agate House Pueblo in the Petrified Forest National Park in Arizona. Built by ancestral Pueblo people about 990 years ago, this eight-room building was constructed almost entirely out of petrified wood and is believed to have served as either a family home or meeting place. Artificial petrifaction Scientists attempted to artificially petrify organisms as early as the 18th century, when Girolamo Segato claimed to have supposedly "petrified" human remains. His methods were lost, but the bulk of his "pieces" are on display at the Museum of the Department of Anatomy in Florence, Italy. More recent attempts have been both successful and documented, but should be considered as semi-petrifaction or incomplete petrifaction or at least as producing some novel type of wood composite, as the wood material remains to a certain degree; the constituents of wood (cellulose, lignins, lignans, oleoresins, etc.) 
have not been replaced by silicate, but have been infiltrated by specially formulated acidic solutions of aluminosilicate salts that gel in contact with wood matter and form a matrix of silicates within the wood after being left to react slowly for a given period of time in the solution or heat-cured for faster results. Hamilton Hicks of Greenwich, Connecticut, received a patent for his "recipe" for rapid artificial petrifaction of wood under US patent 4,612,050 in 1986. Hicks' recipe consists of highly mineralized water and a sodium silicate solution combined with a dilute acid with a pH of 4.0-5.5. Samples of wood are penetrated with this mineral solution through repeated submersion and applications of the solution. Wood treated in this fashion is - according to the claims in the patent - incapable of being burned and acquires the features of petrified wood. Some uses of this product as suggested by Hicks include use by horse breeders who desire fireproof stables constructed of nontoxic material that would also be resistant to chewing of the wood by horses. In 2005 scientists at the Pacific Northwest National Laboratory (PNNL) reported that they had successfully petrified wood samples artificially. Unlike natural petrification, though, they infiltrated samples in acidic solutions, diffused them internally with titanium and carbon and fired them in a high-temperature oven (circa 1400 °C) in an inert atmosphere to yield a man-made ceramic matrix composite of titanium carbide and silicon carbide still showing the initial structure of wood. Future uses would see these artificially petrified wood-ceramic materials eventually replace metal-based superalloys (which are coated with ultrahard ceramics) in the tool industry. Other vegetal matter could be treated in a similar process and yield abrasive powders. See also Girolamo Segato
Usability
Usability can be described as the capacity of a system to provide a condition for its users to perform the tasks safely, effectively, and efficiently while enjoying the experience. In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. The object of use can be a software application, website, book, tool, machine, process, vehicle, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer. Usability includes methods of measuring usability, such as needs analysis and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability considers user satisfaction and utility as quality components, and aims to improve user experience through iterative design. Introduction The primary notion of usability is that an object designed with a generalized users' psychology and physiology in mind is, for example: More efficient to use—takes less time to accomplish a particular task Easier to learn—operation can be learned by observing the object More satisfying to use Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the users own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team. The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability). 
According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most." Otherwise, most casual users simply leave the site and browse or shop elsewhere. Usability can also include the concept of prototypicality, which is how much a particular thing conforms to the expected shared norm, for instance, in website design, users prefer sites that conform to recognised design norms. Definition ISO defines usability as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of: Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design? Efficiency: Once users have learned the design, how quickly can they perform tasks? Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency? Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors? Satisfaction: How pleasant is it to use the design? Usability is often associated with the functionalities of the product (cf. ISO definition, below), in addition to being solely a characteristic of the user interface (cf. framework of system acceptability, also below, which separates usefulness into usability and utility). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the Interface". Each component may be measured subjectively against criteria, e.g., Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system. Intuitive interaction or intuitive use The term intuitive is often listed as a desirable trait in usable interfaces, sometimes used as a synonym for learnable. In the past, Jef Raskin discouraged using this term in user interface design, claiming that easy to use interfaces are often easy because of the user's exposure to previous similar systems, thus the term 'familiar' should be preferred. 
As an example: Two vertical lines "||" on media player buttons do not intuitively mean "pause"—they do so by convention. This association between intuitive use and familiarity has since been empirically demonstrated in multiple studies by a range of researchers across the world, and intuitive interaction is accepted in the research community as being use of an interface based on past experience with similar interfaces or something else, often not fully conscious, and sometimes involving a feeling of "magic" since the course of the knowledge itself may not be consciously available to the user . Researchers have also investigated intuitive interaction for older people, people living with dementia, and children. Some have argued that aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach and to stick with boring designs. However, applying familiar features into a new interface has been shown not to result in boring design if designers use creative approaches rather than simple copying. The throwaway remark that "the only intuitive interface is the nipple; everything else is learned." is still occasionally mentioned. Any breastfeeding mother or lactation consultant will tell you this is inaccurate and the nipple does in fact require learning on both sides. In 1992, Bruce Tognazzini even denied the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences." Instead, he advocated the term "intuitable," i.e., "that users could intuit the workings of an application by seeing it and using it". However, the term intuitive interaction has become well accepted in the research community over the past 20 or so years and, although not perfect, it should probably be accepted and used. ISO standards ISO/TR 16982:2002 standard ISO/TR 16982:2002 ("Ergonomics of human-system interaction—Usability methods supporting human-centered design") is an International Standards Organization (ISO) standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described. ISO 9241 standard ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers. 
Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human System Interaction. As part of this change, ISO is renumbering some parts of the standard so that it can cover more topics, e.g. tactile and haptic interaction. The first part to be renumbered was part 10 in 2006, now part 110. IEC 62366 IEC 62366-1:2015 + COR1:2016 & IEC/TR 62366-2 provide guidance on usability engineering specific to a medical device. Designing for usability Any system or device designed for use by people should be easy to use, easy to learn, easy to remember (the instructions), and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow these three design principles Early focus on end users and the tasks they need the system/device to do Empirical measurement using quantitative or qualitative measures Iterative design, in which the designers work in a series of stages, improving the design each time Early focus on users and tasks The design team should be user-driven and it should be in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods may contribute to understanding potential users and their perceptions of how well the product or process works. Usability considerations, such as who the users are and their experience with similar systems must be examined. As part of understanding users, this knowledge must "...be played against the tasks that the users will be expected to perform." This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using your system. Designers must understand how cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in the designers' minds is to use personas, which are made-up representative users. See below for further discussion of personas. Another more expensive but more insightful method is to have a panel of potential users work closely with the design team from the early stages. Empirical measurement Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability. (See Evaluation Methods). It is important in this stage to use quantitative usability specifications such as time and errors to complete tasks and number of users to test, as well as examine performance and attitudes of the users testing the system. Finally, "reviewing or demonstrating" a system before the user tests it can result in misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods. Iterative design Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations of a design are implemented. 
The key requirements for Iterative Design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution. Rather, there are empirical methods that can be used during system development or after the system is delivered, usually a more inopportune time. Ultimately, iterative design works towards meeting goals such as making the system user friendly, easy to use, easy to operate, simple, etc. Evaluation methods There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below. Cognitive modeling methods Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or to predict problems, errors, and pitfalls during the design process. A few examples of cognitive models include: Parallel design With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas, and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept. GOMS GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user must accomplish. An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplish a goal. Selection rules specify which method satisfies a given goal, based on context. Human processor model Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information. The human processor model divides processing among perceptual, cognitive, and motor processors, each with its own cycle time and memory. Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors for a younger adult. Variables that affect these estimates can include subject age, aptitudes, ability, and the surrounding environment. Long-term memory is believed to have an infinite capacity and decay time. Keystroke level modeling Keystroke level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity (a rough worked estimate is sketched below). Inspection methods These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data as tasks can be timed and recorded. 
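As a rough illustration of the keystroke-level modeling mentioned above, the sketch below simply sums per-operator time estimates for a short task. The operator times are the commonly quoted Card–Moran–Newell figures and the task sequence is hypothetical; treat both as assumptions rather than measurements.

```python
# Minimal keystroke-level model (KLM) sketch. Operator times (in seconds) are
# the commonly quoted Card-Moran-Newell estimates; treat them as assumptions.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press (average skilled typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

# Hypothetical task: mentally prepare, point to a field, click, type "42", press Enter.
task = ["M", "P", "K", "K", "K", "K"]

estimated_time = sum(OPERATOR_TIMES[op] for op in task)
print(f"Estimated task time: {estimated_time:.2f} s")
```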
Card sorts Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a Web site in a way that makes sense to them. Participants review items from a Web site and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the Web site. Card sorting helps to build the structure for a Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users. Tree tests Tree testing is a way to evaluate the effectiveness of a website's top-down organization. Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design. Ethnography Ethnographic analysis is derived from anthropology. Field observations are taken at a site of a possible user, which track the artifacts of work such as Post-It notes, items on desktop, shortcuts, and items in trash bins. These observations also gather the sequence of work and interruptions that determine the user's typical day. Heuristic evaluation Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examining the interface and using recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods, as it is quick, cheap, and easy. Heuristic evaluation was developed to aid in the design of computer user-interface design. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics.) It is widely used based on its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time. Match between system and the real world: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order. User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo. Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. 
The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. Flexibility and efficiency of use: Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions. Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. Thus, by determining which guidelines are violated, the usability of a device can be determined. Usability inspection Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users. Pluralistic inspection Pluralistic Inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate step by step of a task scenario. As more people inspect the scenario for problems, the higher the probability to find problems. In addition, the more interaction in the team, the faster the usability issues are resolved. Consistency inspection In consistency inspection, expert designers review products or projects to ensure consistency across multiple products to look if it does things in the same way as their own designs. Activity Analysis Activity analysis is a usability method used in preliminary stages of development to get a sense of situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected are qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?" Inquiry methods The following usability evaluation methods involve collecting qualitative data from users. Although the data collected is subjective, it provides valuable information on what the user wants. Task analysis Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, a third analysis is often used: understanding users' environments (physical, social, cultural, and technological environments). Focus groups A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. 
Used in the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered is not usually quantitative, but can help get an idea of a target group's opinion. Questionnaires/surveys Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method and often does not appear to be a survey, but just a warranty card. Prototyping methods It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost constraints, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component of the system. Prototyping is both an attitude and an output, as it is a process for generating and reflecting on tangible ideas by allowing failure to occur early. Prototyping helps people to see what could be, communicating a shared vision and giving shape to the future. The types of usability prototypes may vary from paper models, index cards, and hand-drawn models to storyboards. Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes; however, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system. The Tool Kit Approach This tool kit is a wide library of methods that uses traditional programming languages and is primarily developed for computer programmers. The code created for testing in the tool kit approach can be used in the final product. However, to get the highest benefit from the tool, the user must be an expert programmer. The Parts Kit Approach The two elements of this approach are a parts library and a method for identifying the connections between the parts. This approach can be used by almost anyone and is a great asset for designers with repetitive tasks. Animation Language Metaphor This approach is a combination of the tool kit approach and the parts kit approach. Both the dialogue designers and the programmers are able to interact with this prototyping tool. Rapid prototyping Rapid prototyping is a method used in early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping. Testing methods These usability evaluation methods involve testing of subjects for the most quantitative data. Usually recorded on video, they provide task completion time and allow for observation of attitude. Regardless of how carefully a system is designed, all theories must be tested using usability tests. 
Usability tests involve typical users using the system (or product) in a realistic environment [see simulation]. Observation of the user's behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system. Metrics While conducting usability tests, designers must identify what it is they are going to measure, that is, the usability metrics. These metrics are often variable, and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are also typically assessed with smaller groups of subjects. Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus more on the individual user. As the designs become more complex, the testing must become more formalized. Testing equipment will become more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage that complete the task, how long it takes to complete the tasks, ratios of success to failure in completing the task, time spent on errors, the number of errors, ratings on satisfaction scales, the number of times the user seems frustrated, etc. Additional observations of the users give designers insight into navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find/create a prototype design that users like and use to successfully perform given tasks. After conducting usability tests, it is important for a designer to record what was observed, as well as why such behavior occurred, and to modify the model according to the results. Often it is quite difficult to distinguish the source of the design errors from what the user did wrong. However, effective usability tests will not generate a solution to the problems, but provide modified design guidelines for continued testing. Remote usability testing Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis, without the need for dedicated facilities. Additionally, this style of user testing also provides an opportunity to segment feedback by demographic, attitudinal and behavioral type. The tests are carried out in the user's own environment (rather than labs), helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types: quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys. These types of studies are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations.
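As a concrete illustration of the task-based metrics listed above, the following is a minimal sketch in Python, not taken from the original text, of how completion rate, mean time on task, and error counts might be summarized from hypothetical test-session records; the record fields and values are invented for illustration.

# Summarize hypothetical usability-test sessions into common metrics:
# completion rate, mean time on task (for completed sessions), mean error count.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    participant: str
    completed: bool
    seconds_on_task: float
    errors: int

def summarize(sessions):
    completed = [s for s in sessions if s.completed]
    return {
        "completion_rate": len(completed) / len(sessions),
        "mean_time_on_task_s": mean(s.seconds_on_task for s in completed) if completed else None,
        "mean_errors": mean(s.errors for s in sessions),
    }

sessions = [
    Session("P1", True, 84.0, 1),
    Session("P2", False, 210.0, 4),
    Session("P3", True, 96.5, 0),
]
print(summarize(sessions))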
Qualitative studies usually allow for observing respondents' screens and verbal think-aloud commentary (Screen Recording Video, SRV) and, for a richer level of insight, also include the webcam view of the respondent (Video-in-Video, ViV, sometimes referred to as Picture-in-Picture, PiP). Remote usability testing for mobile devices The growth in mobile and associated platforms and services (e.g., mobile gaming experienced 20x growth from 2010 to 2012) has generated a need for unmoderated remote usability testing on mobile devices, both for websites and especially for app interactions. One methodology consists of shipping cameras and special camera-holding fixtures to dedicated testers, and having them record the screens of the mobile smart-phone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the computer desktop screen of the respondent, who can then be recorded through their webcam, yielding a combined Video-in-Video view in which the participant and the screen interactions are viewed simultaneously while incorporating the verbal think-aloud commentary of the respondents. Thinking aloud The think-aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes (i.e. expressing their opinions, thoughts, anticipations, and actions) as they perform a task or set of tasks. As a widespread method of usability testing, think aloud provides the researchers with the ability to discover what users really think during task performance and completion. Often an instructor is present to prompt the user into being more vocal as they work. Similar to the subjects-in-tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire. RITE method Rapid Iterative Testing and Evaluation (RITE) is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come into the lab, decide on how the users' behaviors will be measured, construct a test script and have participants engage in a verbal protocol (e.g., think aloud). However, it differs from these methods in that it advocates that changes to the user interface are made as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant have been collected, the usability engineer and team decide if they will be making any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users. Subjects-in-tandem or co-discovery Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud, and through these discussions observers learn where the problem areas of a design are.
To encourage co-operative problem-solving between the two subjects, and the attendant discussions leading to it, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for testing of software, one subject may be put in charge of the mouse and the other of the keyboard). Component-based usability testing Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components. The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires. Other methods Cognitive walkthrough Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users. Benchmarking Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of lab studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol and data analysis. Meta-analysis Meta-analysis is a statistical procedure to combine results across studies to integrate the findings. The term was coined in 1976 to describe a quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide very accurate quantitative support (a simple calculation of this kind is sketched after this passage). Persona Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interactive design in 1998 in his book The Inmates Are Running the Asylum, but had used the concept as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of designing, so that designers have a tangible idea of who the users of their product will be. Personas are the archetypes that represent actual groups of users and their needs, which can be a general description of a person, context, or usage scenario. This technique turns marketing data on the target user population into a few physical concepts of users to create empathy among the design team, with the final aim of tailoring a product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.
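As an illustration of the meta-analysis idea described above, the following is a minimal sketch, not from the original text, of a fixed-effect inverse-variance combination of per-study usability scores; the study names, means (for example, mean System Usability Scale scores), and standard errors are invented for illustration.

# Fixed-effect meta-analysis sketch: pool hypothetical per-study mean usability
# scores using inverse-variance weights.
studies = [
    # (study name, mean score, standard error of the mean)
    ("Study A", 68.0, 3.1),
    ("Study B", 74.5, 2.4),
    ("Study C", 71.2, 4.0),
]

weights = [1.0 / se ** 2 for _, _, se in studies]
pooled_mean = sum(w * m for w, (_, m, _) in zip(weights, studies)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled mean = {pooled_mean:.2f}, 95% CI half-width = {1.96 * pooled_se:.2f}")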
Benefits The key benefits of usability are: Higher revenues through increased sales Increased user efficiency and user satisfaction Reduced development costs Reduced support costs Corporate integration An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas: Increased productivity Decreased training and support costs Increased sales and revenues Reduced development time and costs Reduced maintenance costs Increased customer satisfaction Increased usability in the workplace fosters several responses from employees: "Workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity." To create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to): Working posture Design of workstation furniture Screen displays Input devices Organization issues Office environment Software interface By working to improve these factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates to overall improvement. For example, making software user interfaces easier to understand reduces the need for extensive training. The improved interface tends to lower the time needed to perform tasks, and so would both raise the productivity levels for employees and reduce development time (and thus costs). The aforementioned factors are not mutually exclusive; rather, they should be understood to work in conjunction to form the overall workplace environment. Since the 2010s, usability has been recognized as an important software quality attribute, earning its place among more traditional attributes such as performance, robustness and aesthetic appearance. Various academic programs focus on usability. Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services. There is some resistance to integrating usability work into organisations: usability is seen as a vague concept, it is difficult to measure, and other areas are prioritised when IT projects run out of time or money. Professional development Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, systems design engineers, or with a degree in information architecture, information or library science, or Human-Computer Interaction (HCI). More often, though, they are people trained in specific applied fields who have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines. For those seeking to extend their training, the User Experience Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UXPA also sponsors World Usability Day each November. Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer Human Interaction (SIGCHI), Design of Communication (SIGDOC) and Computer Graphics and Interactive Techniques (SIGGRAPH).
The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX). They publish a quarterly newsletter called Usability Interface. See also Accessibility Chief experience officer (CXO) Design for All (inclusion) Experience design Fitts's law Form follows function Gemba or customer visit GOMS Gotcha (programming) GUI Human factors Information architecture Interaction design Interactive systems engineering Internationalization Learnability List of human-computer interaction topics List of system quality attributes Machine-Readable Documents Natural mapping (interface design) Non-functional requirement RITE method System Usability Scale Universal usability Usability goals Usability testing Usability engineering User experience User experience design Web usability World Usability Day References Further reading R. G. Bias and D. J. Mayhew (eds) (2005), Cost-Justifying Usability: An Update for the Internet Age, Morgan Kaufmann Donald A. Norman (2013), The Design of Everyday Things, Basic Books, Donald A. Norman (2004), Emotional Design: Why we love (or hate) everyday things, Basic Books, Jakob Nielsen (1994), Usability Engineering, Morgan Kaufmann Publishers, Jakob Nielsen (1994), Usability Inspection Methods, John Wiley & Sons, Ben Shneiderman, Software Psychology, 1980, External links Usability.gov Human–computer interaction Technical communication Information architecture Software quality
0.777011
0.991262
0.770221
Sankey diagram
Sankey diagrams are a data visualisation technique or flow diagram that emphasizes flow, movement, or change from one state to another or one time to another, in which the width of the arrows is proportional to the flow rate of the depicted extensive property. Sankey diagrams can also visualize energy accounts, material flow accounts on a regional or national level, and cost breakdowns. The diagrams are often used in the visualization of material flow analysis. Sankey diagrams emphasize the major transfers or flows within a system. They help locate the most important contributions to a flow. They often show conserved quantities within defined system boundaries. History Sankey diagrams are named after Irish Captain Matthew Henry Phineas Riall Sankey, who used this type of diagram in 1898 in a classic figure (see diagram) showing the energy efficiency of a steam engine. The original charts in black and white displayed just one type of flow (e.g. steam); using colors for different types of flows lets the diagram express additional variables. Over time, it became a standard model used in science and engineering to represent heat balance, energy flows, and material flows, and since the 1990s this visual model has been used in life-cycle assessment of products. One of the most famous Sankey diagrams is Charles Minard's Map of Napoleon's Russian Campaign of 1812. It is a flow map, overlaying a Sankey diagram onto a geographical map. It was created in 1869, predating Sankey's first Sankey diagram of 1898. Minard had used this form of diagram for visualising the flow of goods and the transport of people since at least 1844. Science Sankey diagrams are often used in fields of science, especially physics. They are used to represent energy inputs, useful output, and wasted output. Active examples The United States Energy Information Administration (EIA) produces numerous Sankey diagrams annually in its Annual Energy Review, which illustrate the production and consumption of various forms of energy. The US Department of Energy's Lawrence Livermore Laboratory maintains a site of Sankey diagrams, including US energy flow and carbon flow. Eurostat, the Statistical Office of the European Union, has developed an interactive Sankey web tool to visualise energy data by means of flow diagrams. The tool allows the building and customisation of diagrams by playing with different options (country, year, fuel, level of detail). The International Energy Agency (IEA) created an interactive Sankey web application that details the flow of energy for the entire planet. Users can select specific countries, points in time back to 1973, and modify the arrangement of various flows within the Sankey diagram. See also Alluvial diagram (a type of Sankey diagram that uses the same kind of representation to depict how items re-group) Material flow management Parallel coordinates Time geography References External links Diagrams Irish inventions British inventions
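Because Sankey diagrams are now usually drawn with software, the following is a minimal sketch, not part of the original article, of how a small flow diagram could be drawn with the open-source Plotly library in Python; the node labels and flow values are invented for illustration, and link widths are drawn proportional to the values, as described above.

import plotly.graph_objects as go

# Invented example flows: fuel -> electricity/losses -> end uses, in arbitrary units.
labels = ["Fuel", "Electricity", "Losses", "Residential", "Industry"]
fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 0, 1, 1],     # indices into labels
        target=[1, 2, 3, 4],
        value=[60, 40, 25, 35],  # arrow width is proportional to these values
    ),
))
fig.update_layout(title_text="Minimal Sankey sketch", font_size=12)
fig.show()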
0.772318
0.997282
0.770219
Anhydrous
A substance is anhydrous if it contains no water. Many processes in chemistry can be impeded by the presence of water; therefore, it is important that water-free reagents and techniques are used. In practice, however, it is very difficult to achieve perfect dryness; anhydrous compounds gradually absorb water from the atmosphere, so they must be stored carefully. Solids Many salts and solids can be dried using heat or under vacuum. Desiccators can also be used to store reagents in dry conditions. Common desiccants include phosphorus pentoxide and silica gel. Chemists may also require dry glassware for sensitive reactions. This can be achieved by drying glassware in an oven, by flame, or under vacuum. Dry solids can be produced by freeze-drying, which is also known as lyophilization. Liquids or solvents In many cases, the presence of water can prevent a reaction from happening, or cause undesirable products to form. To prevent this, anhydrous solvents must be used when performing certain reactions. Examples of reactions requiring the use of anhydrous solvents are the Grignard reaction and the Wurtz reaction. Solvents have typically been dried using distillation or by reaction with reactive metals or metal hydrides. These methods can be dangerous and are a common cause of lab fires. More modern techniques include the use of molecular sieves or a column purification system. Molecular sieves are far more effective than most common methods for drying solvents, are safer, and require no special equipment for handling. Column solvent purification devices (generally referred to as Grubb's columns) recently became available, reducing the hazards (water-reactive substances, heat) of the classical dehydrating methods. Anhydrous solvents are commercially available from chemical suppliers and are packaged in sealed containers to maintain dryness. Typically, anhydrous solvents will contain approximately 10 ppm of water and will take up additional water if they are not properly stored. Organic solutions can be dried using a range of drying agents. Typically, following a workup, the organic extract is dried using magnesium sulfate or a similar drying agent to remove most remaining water. Anhydrous acetic acid is known as glacial acetic acid. Gases Several substances that exist as gases at standard conditions of temperature and pressure are commonly used as concentrated aqueous solutions. To clarify that it is the gaseous form that is being referred to, the term anhydrous is prefixed to the name of the substance: Gaseous ammonia is generally referred to as anhydrous ammonia, to distinguish it from its solution in water, household ammonia solution, also known as ammonium hydroxide. Gaseous hydrogen chloride is generally referred to as anhydrous, to distinguish it from its solution in water, hydrochloric acid. Reactions which produce water can be kept dry using a Dean–Stark apparatus. See also Air-free technique Acidic oxide, a.k.a. acid anhydride Base anhydride Hydrate, a chemical substance that contains water or its constituent elements References Chemical properties
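To put the quoted water content in concrete terms, the following is a minimal sketch, not from the original article, converting a mass-based specification of 10 ppm water into an approximate molar amount of water per litre of solvent; the solvent density used (about 0.89 g/mL, roughly that of THF) is an illustrative assumption.

# Convert a mass-based water content (ppm = mg of water per kg of solvent)
# into milligrams and millimoles of water per litre of solvent.
PPM_WATER = 10            # specification: 10 ppm water by mass
DENSITY_G_PER_ML = 0.89   # assumed solvent density (illustrative, ~THF)
M_WATER = 18.015          # molar mass of water in g/mol

grams_solvent_per_litre = DENSITY_G_PER_ML * 1000
grams_water_per_litre = grams_solvent_per_litre * PPM_WATER / 1e6
mmol_water_per_litre = grams_water_per_litre / M_WATER * 1000

print(f"{grams_water_per_litre * 1000:.1f} mg of water per litre "
      f"= {mmol_water_per_litre:.2f} mmol per litre")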
0.778503
0.989309
0.77018
Biosemiotics
Biosemiotics (from the Greek βίος bios, "life" and σημειωτικός sēmeiōtikos, "observant of signs") is a field of semiotics and biology that studies the prelinguistic meaning-making, biological interpretation processes, production of signs and codes and communication processes in the biological realm. Biosemiotics integrates the findings of biology and semiotics and proposes a paradigmatic shift in the scientific view of life, in which semiosis (sign process, including meaning and interpretation) is one of its immanent and intrinsic features. The term biosemiotic was first used by Friedrich S. Rothschild in 1962, but Thomas Sebeok, Thure von Uexküll, Jesper Hoffmeyer and many others have implemented the term and field. The field is generally divided between theoretical and applied biosemiotics. Insights from biosemiotics have also been adopted in the humanities and social sciences, including human-animal studies, human-plant studies and cybersemiotics. Definition Biosemiotics is the study of meaning making processes in the living realm, or, to elaborate, a study of signification, communication and habit formation of living processes semiosis (creating and changing sign relations) in living nature the biological basis of all signs and sign interpretation interpretative processes, codes and cognition in organisms Main branches According to the basic types of semiosis under study, biosemiotics can be divided into vegetative semiotics (also endosemiotics, or phytosemiotics), the study of semiosis at the cellular and molecular level (including the translation processes related to genome and the organic form or phenotype); vegetative semiosis occurs in all organisms at their cellular and tissue level; vegetative semiotics includes prokaryote semiotics, sign-mediated interactions in bacteria communities such as quorum sensing and quorum quenching. zoosemiotics or animal semiotics, or the study of animal forms of knowing; animal semiosis occurs in the organisms with neuromuscular system, also includes anthroposemiotics, the study of semiotic behavior in humans. According to the dominant aspect of semiosis under study, the following labels have been used: biopragmatics, biosemantics, and biosyntactics. History Apart from Charles Sanders Peirce (1839–1914) and Charles W. Morris (1903–1979), early pioneers of biosemiotics were Jakob von Uexküll (1864–1944), Heini Hediger (1908–1992), Giorgio Prodi (1928–1987), Marcel Florkin (1900–1979) and Friedrich S. Rothschild (1899–1995); the founding fathers of the contemporary interdiscipline were Thomas Sebeok (1920–2001) and Thure von Uexküll (1908–2004). In the 1980s a circle of mathematicians active in Theoretical Biology, René Thom (Institut des Hautes Etudes Scientifiques), Yannick Kergosien (Dalhousie University and Institut des Hautes Etudes Scientifiques), and Robert Rosen (Dalhousie University, also a former member of the Buffalo group with Howard H. Pattee), explored the relations between Semiotics and Biology using such headings as "Nature Semiotics", "Semiophysics", or "Anticipatory Systems" and taking a modeling approach. The contemporary period (as initiated by Copenhagen-Tartu school) include biologists Jesper Hoffmeyer, Kalevi Kull, Claus Emmeche, Terrence Deacon, semioticians Martin Krampen, Paul Cobley, philosophers Donald Favareau, John Deely, John Collier and complex systems scientists Howard H. Pattee, Michael Conrad, Luis M. Rocha, Cliff Joslyn and León Croizat. 
In 2001, an annual international conference for biosemiotic research known as the Gatherings in Biosemiotics was inaugurated, and has taken place every year since. In 2004, a group of biosemioticians – Marcello Barbieri, Claus Emmeche, Jesper Hoffmeyer, Kalevi Kull, and Anton Markoš – decided to establish an international journal of biosemiotics. Under their editorship, the Journal of Biosemiotics was launched by Nova Science Publishers in 2005 (two issues published), and with the same five co-editors Biosemiotics was launched by Springer in 2008. The book series Biosemiotics (Springer), edited by Claus Emmeche, Donald Favareau, Kalevi Kull, and Alexei Sharov, began in 2007 and 27 volumes have been published in the series by 2024. The International Society for Biosemiotic Studies was established in 2005 by Donald Favareau and the five editors listed above. A collective programmatic paper on the basic theses of biosemiotics appeared in 2009. and in 2010, an 800 page textbook and anthology, Essential Readings in Biosemiotics, was published, with bibliographies and commentary by Donald Favareau. One of roots for biosemiotics has been medical semiotics. In 2016, Springer published Biosemiotic Medicine: Healing in the World of Meaning, edited by Farzad Goli as part of Studies in Neuroscience, Consciousness and Spirituality. In the humanities Since the work of Jakob von Uexküll and Martin Heidegger, several scholars in the humanities have engaged with or appropriated ideas from biosemiotics in their own projects; conversely, biosemioticians have critically engaged with or reformulated humanistic theories using ideas from biosemiotics and complexity theory. For instance, Andreas Weber has reformulated some of Hans Jonas's ideas using concepts from biosemiotics, and biosemiotics have been used to interpret the poetry of John Burnside. Since 2021, the American philosopher Jason Josephson Storm has drawn on biosemiotics and empirical research on animal communication to propose hylosemiotics, a theory of ontology and communication that Storm believes could allow the humanities to move beyond the linguistic turn. John Deely's work also represents an engagement between humanistic and biosemiotic approaches. Deely was trained as a historian and not a biologist but discussed biosemiotics and zoosemiotics extensively in his introductory works on semiotics and clarified terms that are relevant for biosemiotics. Although his idea of physiosemiotics was criticized by practicing biosemioticians, Paul Cobley, Donald Favareau, and Kalevi Kull wrote that "the debates on this conceptual point between Deely and the biosemiotics community were always civil and marked by a mutual admiration for the contributions of the other towards the advancement of our understanding of sign relations." See also Animal communication Biocommunication (science) Cognitive biology Ecosemiotics Mimicry Naturalization of intentionality Phytosemiotics Plant communication Zoosemiotics References Bibliography Alexander, V. N. (2011). The Biologist's Mistress: Rethinking Self-Organization in Art, Literature and Nature. Litchfield Park AZ: Emergent Publications. Barbieri, Marcello (ed.) (2008). The Codes of Life: The Rules of Macroevolution. Berlin: Springer. Emmeche, Claus; Kull, Kalevi (eds.) (2011). Towards a Semiotic Biology: Life is the Action of Signs. London: Imperial College Press. Emmeche, Claus; Kalevi Kull and Frederik Stjernfelt. (2002): Reading Hoffmeyer, Rethinking Biology. (Tartu Semiotics Library 3). 
Tartu: Tartu University Press. Favareau, D. (ed.) (2010). Essential Readings in Biosemiotics: Anthology and Commentary. Berlin: Springer. Favareau, D. (2006). The evolutionary history of biosemiotics. In "Introduction to Biosemiotics: The New Biological Synthesis." Marcello Barbieri (Ed.) Berlin: Springer. pp 1–67. Hoffmeyer, Jesper. (1996): Signs of Meaning in the Universe. Bloomington: Indiana University Press. (special issue of Semiotica vol. 120 (no.3-4), 1998, includes 13 reviews of the book and a rejoinder by the author). Hoffmeyer, Jesper (2008). Biosemiotics: An Examination into the Signs of Life and the Life of Signs. Scranton: University of Scranton Press. Hoffmeyer, Jesper (ed.)(2008). A Legacy for Living Systems: Gregory Bateson as a Precursor to Biosemiotics. Berlin: Springer. Hoffmeyer Jesper; Kull, Kalevi (2003): Baldwin and Biosemiotics: What Intelligence Is For. In: Bruce H. Weber and David J. Depew (eds.), Evolution and Learning - The Baldwin Effect Reconsidered'. Cambridge: The MIT Press. Kull, Kalevi, eds. (2001). Jakob von Uexküll: A Paradigm for Biology and Semiotics. Berlin & New York: Mouton de Gruyter. [ = Semiotica vol. 134 (no.1-4)]. Rothschild, Friedrich S. (2000). Creation and Evolution: A Biosemiotic Approach. Edison, New Jersey: Transaction Publishers. Sebeok, Thomas A.; Umiker-Sebeok, Jean (eds.) (1992): Biosemiotics. The Semiotic Web 1991. Berlin and New York: Mouton de Gruyter. Sebeok, Thomas A.; Hoffmeyer, Jesper; Emmeche, Claus (eds.) (1999). Biosemiotica. Berlin & New York: Mouton de Gruyter. [ = Semiotica vol. 127 (no.1-4)]. External links International Society for Biosemiotics Studies, (older version) New Scientist article on Biosemiotics The Biosemiotics website by Alexei Sharov Biosemiotics in Spanish Biosemiotics, introduction (Archive.org archived version) Overview of Gatherings in Biosemiotics The S.E.E.D. Journal (Semiotics, Evolution, Energy, and Development) Jakob von Uexküll Centre Zoosemiotics Home Page Plant cognition Plant communication Semiotics Zoosemiotics
0.782107
0.984719
0.770156
Phase transition
In physics, chemistry, and other related fields like biology, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure. This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point, resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point. Types of phase transition States of matter Phase transitions commonly refer to when a substance transforms between one of the four states of matter to another. At the phase transition point for a substance, for instance the boiling point, the two phases involved - liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable. Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure are identified in the following table: For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram. Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point. In exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating and supercooling, for example. Metastable states do not appear on usual phase diagrams. Structural Phase transitions can also occur when a solid changes to a different structure without changing its chemical makeup. In elements, this is known as allotropy, whereas in compounds it is known as polymorphism. The change from one crystal structure to another, from a crystalline solid to an amorphous solid, or from one amorphous structure to another are all examples of solid to solid phase transitions. The martensitic transformation occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. Order-disorder transitions such as in alpha-titanium aluminides. As with states of matter, there is also a metastable to equilibrium phase transformation for structural phase transitions. A metastable polymorph which forms rapidly due to lower surface energy will transform to an equilibrium phase given sufficient thermal input to overcome an energetic barrier. Magnetic Phase transitions can also describe the change between different kinds of magnetic ordering. 
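The statement that the two phases have identical free energies at the transition point can be written compactly. The following display is added here for clarity using standard notation (it is not quoted from the article): g denotes the molar Gibbs free energy (equivalently, the chemical potential) of each phase and Tb the boiling temperature at pressure p.

g_{\text{liquid}}(T_b, p) = g_{\text{gas}}(T_b, p), \qquad
g_{\text{liquid}}(T, p) < g_{\text{gas}}(T, p) \ \text{for } T < T_b, \qquad
g_{\text{gas}}(T, p) < g_{\text{liquid}}(T, p) \ \text{for } T > T_b.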
The most well-known is the transition between the ferromagnetic and paramagnetic phases of magnetic materials, which occurs at what is called the Curie point. Another example is the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. A simplified but highly useful model of magnetic phase transitions is provided by the Ising model (see the simulation sketch after this section). Mixtures Phase transitions involving solutions and mixtures are more complicated than transitions involving a single compound. While chemically pure compounds exhibit a single melting temperature between solid and liquid phases, mixtures can either have a single melting point, known as congruent melting, or they have different liquidus and solidus temperatures, resulting in a temperature span where solid and liquid coexist in equilibrium. This is often the case in solid solutions, where the two components are isostructural. There are also a number of phase transitions involving three phases: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases. The same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation. A peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase. A peritectoid reaction is like a peritectic reaction, except that it involves only solid phases. A monotectic reaction consists of a change from one liquid to a combination of a solid and a second liquid, where the two liquids display a miscibility gap. Separation into multiple phases can occur via spinodal decomposition, in which a single phase is cooled and separates into two different compositions. Non-equilibrium mixtures can occur, such as in supersaturation. Other examples Other phase changes include: Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases. The dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron (110). The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature. The emergence of metamaterial properties in artificial photonic media as their parameters are varied (Zhou, W., and Fan, S., eds., Semiconductors and Semimetals, Vol. 100: Photonic Crystal Metasurface Optoelectronics, Elsevier, 2019, https://www.sciencedirect.com/bookseries/semiconductors-and-semimetals/vol/100/suppl/C). Quantum condensation of bosonic fluids (Bose–Einstein condensation). The superfluid transition in liquid helium is an example of this. The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled. Isotope fractionation occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses (an equilibrium fractionation), the heavier water isotopes (18O and 2H) become enriched in the liquid phase while the lighter isotopes (16O and 1H) tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are small. Phase transitions can occur for non-thermodynamic systems, where temperature is not a parameter.
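As a concrete illustration of the Ising model mentioned above, the following is a minimal Monte Carlo sketch, not taken from the original article, of a two-dimensional Ising model with Metropolis updates; the lattice size, temperature, and number of sweeps are arbitrary illustrative choices (in units where the coupling J and Boltzmann's constant are 1, the exact critical temperature of the 2D model is about 2.269).

import math
import random

L = 20          # lattice side length (illustrative choice)
T = 2.0         # temperature in units of J/kB; below ~2.269, so the ordered phase
SWEEPS = 2000   # number of Monte Carlo sweeps

# Random initial spin configuration on an L x L lattice with periodic boundaries.
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def delta_energy(i, j):
    # Energy change for flipping spin (i, j), with J = 1.
    s = spins[i][j]
    neighbors = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                 spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2 * s * neighbors

for _ in range(SWEEPS):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = delta_energy(i, j)
        # Metropolis acceptance rule.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

m = abs(sum(sum(row) for row in spins)) / (L * L)
print(f"|magnetization per spin| at T = {T}: {m:.3f}")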
Examples include: quantum phase transitions, dynamic phase transitions, and topological (structural) phase transitions. In these types of systems, other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks. Classifications Ehrenfest classification Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the (inverse of the) first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative (the order parameter, which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit discontinuity in a second derivative of the free energy. These include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature. The magnetic susceptibility, the second derivative of the free energy with respect to the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. For example, the Gross–Witten–Wadia phase transition in 2-d lattice quantum chromodynamics is a third-order phase transition. The Curie points of many ferromagnets are also third-order transitions, as shown by their specific heat having a sudden change in slope. In practice, only the first- and second-order phase transitions are typically observed. The second-order phase transition was for a while controversial, as it seems to require two sheets of the Gibbs free energy to osculate exactly, which is so unlikely as to never occur in practice. Cornelis Gorter replied to the criticism by pointing out that the Gibbs free energy surface might have two sheets on one side, but only one sheet on the other side, creating a forked appearance (pp. 146–150). The Ehrenfest classification implicitly allows for continuous phase transformations, where the bonding character of a material changes, but there is no discontinuity in any free energy derivative. An example of this occurs at the supercritical liquid–gas boundaries. The first example of a phase transition which did not fit into the Ehrenfest classification was the exact solution of the Ising model, discovered in 1944 by Lars Onsager. The exact specific heat differed from the earlier mean-field approximations, which had predicted that it has a simple discontinuity at the critical temperature. Instead, the exact specific heat had a logarithmic divergence at the critical temperature. In the following decades, the Ehrenfest classification was replaced by a simplified classification scheme that is able to incorporate such transitions.
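To make the derivative language of the Ehrenfest scheme explicit, the following display is added for reference (standard thermodynamic relations, not quoted from the article): the first derivatives of the Gibbs free energy G, namely the entropy S and volume V, jump at a first-order transition, while second derivatives such as the heat capacity C_p and the isothermal compressibility κ_T jump or diverge at a second-order transition.

S = -\left(\frac{\partial G}{\partial T}\right)_{p}, \qquad
V = \left(\frac{\partial G}{\partial p}\right)_{T}
\qquad \text{(discontinuous at a first-order transition)}

C_{p} = -T\left(\frac{\partial^{2} G}{\partial T^{2}}\right)_{p}, \qquad
\kappa_{T} = -\frac{1}{V}\left(\frac{\partial^{2} G}{\partial p^{2}}\right)_{T}
\qquad \text{(discontinuous or divergent at a second-order transition)}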
Modern classifications In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes: First-order phase transitions are those that involve a latent heat (the relation given after this passage connects this latent heat to the slope of the coexistence curve). During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy per volume. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not (Faghri, A., and Zhang, Y., Fundamentals of Multiphase Heat Transfer and Flow, Springer, New York, NY, 2020). Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor, but forms a turbulent mixture of liquid water and vapor bubbles). Yoseph Imry and Michael Wortis showed that quenched disorder can broaden a first-order transition. That is, the transformation is completed over a finite range of temperatures, but phenomena like supercooling and superheating survive and hysteresis is observed on thermal cycling. Second-order phase transitions are also called "continuous phase transitions". They are characterized by a divergent susceptibility, an infinite correlation length, and a power law decay of correlations near criticality. Examples of second-order phase transitions are the ferromagnetic transition, the superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field and for a Type-II superconductor the phase transition is second-order for both normal-state–mixed-state and mixed-state–superconducting-state transitions) and the superfluid transition. In contrast to viscosity, the thermal expansion and heat capacity of amorphous materials show a relatively sudden change at the glass transition temperature, which enables accurate detection using differential scanning calorimetry measurements. Lev Landau gave a phenomenological theory of second-order phase transitions. Apart from isolated, simple phase transitions, there exist transition lines as well as multicritical points, when varying external parameters like the magnetic field or composition. Several transitions are known as infinite-order phase transitions. They are continuous but break no symmetries. The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model. Many quantum phase transitions, e.g., in two-dimensional electron gases, belong to this class. The liquid–glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times. No direct experimental evidence supports the existence of these transitions.
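For first-order transitions, the latent heat is tied to the slope of the coexistence curve in the pressure–temperature plane. The following standard relation, the Clausius–Clapeyron equation, is added here for reference rather than quoted from the article; L is the latent heat and Δv the change in specific volume across the transition.

\frac{dp}{dT} = \frac{L}{T\,\Delta v}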
This continuous variation of the coexisting fractions with temperature raised interesting possibilities. On cooling, some liquids vitrify into a glass rather than transform to the equilibrium crystal phase. This happens if the cooling rate is faster than a critical cooling rate, and is attributed to the molecular motions becoming so slow that the molecules cannot rearrange into the crystal positions. This slowing down happens below a glass-formation temperature Tg, which may depend on the applied pressure. If the first-order freezing transition occurs over a range of temperatures, and Tg falls within this range, then there is an interesting possibility that the transition is arrested when it is partial and incomplete. Extending these ideas to first-order magnetic transitions being arrested at low temperatures, resulted in the observation of incomplete magnetic transitions, with two magnetic phases coexisting, down to the lowest temperature. First reported in the case of a ferromagnetic to anti-ferromagnetic transition, such persistent phase coexistence has now been reported across a variety of first-order magnetic transitions. These include colossal-magnetoresistance manganite materials, magnetocaloric materials, magnetic shape memory materials, and other materials. The interesting feature of these observations of Tg falling within the temperature range over which the transition occurs is that the first-order magnetic transition is influenced by magnetic field, just like the structural transition is influenced by pressure. The relative ease with which magnetic fields can be controlled, in contrast to pressure, raises the possibility that one can study the interplay between Tg and Tc in an exhaustive way. Phase coexistence across first-order magnetic transitions will then enable the resolution of outstanding issues in understanding glasses. Critical points In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point, at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence, a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light). Symmetry Phase transitions often involve a symmetry breaking process. For instance, the cooling of a fluid into a crystalline solid breaks continuous translation symmetry: each point in the fluid has the same properties, but each point in a crystal does not have the same properties (unless the points are chosen from the lattice points of the crystal lattice). Typically, the high-temperature phase contains more symmetries than the low-temperature phase due to spontaneous symmetry breaking, with the exception of certain accidental symmetries (e.g. the formation of heavy virtual particles, which only occurs at low temperatures). Order parameters An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. At the critical point, the order parameter susceptibility will usually diverge. An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. 
For liquid/gas transitions, the order parameter is the difference of the densities. From a theoretical perspective, order parameters arise from symmetry breaking. When this happens, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization, whose direction was spontaneously chosen when the system cooled below the Curie point. However, note that order parameters can also be defined for non-symmetry-breaking transitions. Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition. There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex or defect lines. Relevance in cosmology Symmetry-breaking phase transitions play an important role in cosmology. As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field. This transition is important to explain the asymmetry between the amount of matter and antimatter in the present-day universe, according to electroweak baryogenesis theory. Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson and David Layzer. See also relational order theories and order and disorder. Critical exponents and universality classes Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat, and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points. Continuous phase transitions can be characterized by parameters known as critical exponents. The most important one is perhaps the exponent describing the divergence of the thermal correlation length on approaching the transition. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed and find that the transition occurs at some critical temperature Tc. When T is near Tc, the heat capacity C typically has a power law behavior: C ∝ |T − Tc|^(−α). The heat capacity of amorphous materials has such a behaviour near the glass transition temperature, where the universal critical exponent α = 0.59. A similar behavior, but with the exponent ν instead of α, applies for the correlation length. The exponent ν is positive. This is different from α. Its actual value depends on the type of phase transition we are considering. The critical exponents are not necessarily the same above and below the critical temperature. When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then some exponents (such as γ, the exponent of the susceptibility) are not identical. For −1 < α < 0, the heat capacity has a "kink" at the transition temperature.
This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = −0.013 ± 0.003. At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample. This experimental value of α agrees with theoretical predictions based on variational perturbation theory. For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3D ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ≈ +0.110. Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior. Several other critical exponents, β, γ, δ, ν, and η, are defined by examining the power-law behavior of a measurable physical quantity near the phase transition. Exponents are related by scaling relations, such as β = γ/(δ − 1) and ν = γ/(2 − η) (a fuller set is listed after this passage). It can be shown that there are only two independent exponents, e.g. ν and η. It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality. For example, the critical exponents at the liquid–gas critical point have been found to be independent of the chemical composition of the fluid. More impressively, but understandably from above, they are an exact match for the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergence of the correlation length is the essential point. Critical phenomena There are also other critical phenomena; e.g., besides static functions there is also critical dynamics. As a consequence, at a phase transition one may observe critical slowing down or speeding up. Connected to the previous phenomenon is also the phenomenon of enhanced fluctuations before the phase transition, as a consequence of the lower degree of stability of the initial phase of the system. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value.
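For reference, the standard scaling and hyperscaling relations connecting the exponents are collected below; they are well-established results of the scaling hypothesis and the renormalization group and are added here rather than quoted from the article (d denotes the spatial dimension).

\alpha + 2\beta + \gamma = 2 \quad \text{(Rushbrooke)}, \qquad
\gamma = \beta(\delta - 1) \quad \text{(Widom)},

\gamma = \nu(2 - \eta) \quad \text{(Fisher)}, \qquad
\nu d = 2 - \alpha \quad \text{(Josephson hyperscaling)}.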
In biological membranes, gel to liquid crystalline phase transitions play a critical role in the physiological functioning of biomembranes. In the gel phase, due to the low fluidity of membrane lipid fatty-acyl chains, membrane proteins have restricted movement and are thus restrained in the exercise of their physiological role. Plants depend critically on photosynthesis by chloroplast thylakoid membranes, which are exposed to cold environmental temperatures. Thylakoid membranes retain innate fluidity even at relatively low temperatures because of the high degree of fatty-acyl disorder allowed by their high content of linolenic acid, an 18-carbon chain with three double bonds. The gel-to-liquid crystalline phase transition temperature of biological membranes can be determined by many techniques, including calorimetry, fluorescence, spin-label electron paramagnetic resonance and NMR, by recording measurements of the relevant parameter at a series of sample temperatures. A simple method for its determination from 13-C NMR line intensities has also been proposed. It has been proposed that some biological systems might lie near critical points. Examples include neural networks in the salamander retina, bird flocks, gene expression networks in Drosophila, and protein folding. However, it is not clear whether or not alternative reasons could explain some of the phenomena supporting arguments for criticality. It has also been suggested that biological organisms share two key properties of phase transitions: the change of macroscopic behavior and the coherence of a system at a critical point. Phase transitions are a prominent feature of motor behavior in biological systems. Spontaneous gait transitions, as well as fatigue-induced motor task disengagements, show typical critical behavior as an intimation of the sudden qualitative change of the previously stable motor behavioral pattern. The characteristic feature of second-order phase transitions is the appearance of fractals in some scale-free properties. It has long been known that protein globules are shaped by interactions with water. The 20 amino acids that form side groups on protein peptide chains range from hydrophilic to hydrophobic, causing the former to lie near the globular surface, while the latter lie closer to the globular center. Twenty fractals were discovered in solvent-associated surface areas of more than 5000 protein segments. The existence of these fractals proves that proteins function near critical points of second-order phase transitions. In groups of organisms in stress (when approaching critical transitions), correlations tend to increase, while at the same time, fluctuations also increase. This effect is supported by many experiments and observations of groups of people, mice, trees, and grassy plants.
Limited up to about 800–1000 °C) Neutron diffraction Perturbed angular correlation (simultaneous measurement of magnetic and non-magnetic transitions. No temperature limits. Measurements over 2000 °C have already been performed, and the technique is theoretically possible up to the highest-melting crystalline materials, such as tantalum hafnium carbide at 4215 °C.) Raman spectroscopy SQUID (measurement of magnetic transitions) Thermogravimetry (very common) X-ray diffraction
Endocrinology
Endocrinology (from endocrine + -ology) is a branch of biology and medicine dealing with the endocrine system, its diseases, and its specific secretions known as hormones. It is also concerned with the integration of developmental events proliferation, growth, and differentiation, and the psychological or behavioral activities of metabolism, growth and development, tissue function, sleep, digestion, respiration, excretion, mood, stress, lactation, movement, reproduction, and sensory perception caused by hormones. Specializations include behavioral endocrinology and comparative endocrinology. The endocrine system consists of several glands, all in different parts of the body, that secrete hormones directly into the blood rather than into a duct system. Therefore, endocrine glands are regarded as ductless glands. Hormones have many different functions and modes of action; one hormone may have several effects on different target organs, and, conversely, one target organ may be affected by more than one hormone. The endocrine system Endocrinology is the study of the endocrine system in the human body. This is a system of glands which secrete hormones. Hormones are chemicals that affect the actions of different organ systems in the body. Examples include thyroid hormone, growth hormone, and insulin. The endocrine system involves a number of feedback mechanisms, so that often one hormone (such as thyroid stimulating hormone) will control the action or release of another secondary hormone (such as thyroid hormone). If there is too much of the secondary hormone, it may provide negative feedback to the primary hormone, maintaining homeostasis. In the original 1902 definition by Bayliss and Starling (see below), they specified that, to be classified as a hormone, a chemical must be produced by an organ, be released (in small amounts) into the blood, and be transported by the blood to a distant organ to exert its specific function. This definition holds for most "classical" hormones, but there are also paracrine mechanisms (chemical communication between cells within a tissue or organ), autocrine signals (a chemical that acts on the same cell), and intracrine signals (a chemical that acts within the same cell). A neuroendocrine signal is a "classical" hormone that is released into the blood by a neurosecretory neuron (see article on neuroendocrinology). Hormones Griffin and Ojeda identify three different classes of hormones based on their chemical composition: Amines Amines, such as norepinephrine, epinephrine, and dopamine (catecholamines), are derived from single amino acids, in this case tyrosine. Thyroid hormones such as 3,5,3'-triiodothyronine (T3) and 3,5,3',5'-tetraiodothyronine (thyroxine, T4) make up a subset of this class because they derive from the combination of two iodinated tyrosine amino acid residues. Peptide and protein Peptide hormones and protein hormones consist of three (in the case of thyrotropin-releasing hormone) to more than 200 (in the case of follicle-stimulating hormone) amino acid residues and can have a molecular mass as large as 31,000 grams per mole. All hormones secreted by the pituitary gland are peptide hormones, as are leptin from adipocytes, ghrelin from the stomach, and insulin from the pancreas. Steroid Steroid hormones are converted from their parent compound, cholesterol. Mammalian steroid hormones can be grouped into five groups by the receptors to which they bind: glucocorticoids, mineralocorticoids, androgens, estrogens, and progestogens. 
Some forms of vitamin D, such as calcitriol, are steroid-like and bind to homologous receptors, but lack the characteristic fused ring structure of true steroids. As a profession Although every organ system secretes and responds to hormones (including the brain, lungs, heart, intestine, skin, and the kidneys), the clinical specialty of endocrinology focuses primarily on the endocrine organs, meaning the organs whose primary function is hormone secretion. These organs include the pituitary, thyroid, adrenals, ovaries, testes, and pancreas. An endocrinologist is a physician who specializes in treating disorders of the endocrine system, such as diabetes, hyperthyroidism, and many others (see list of diseases). Work The medical specialty of endocrinology involves the diagnostic evaluation of a wide variety of symptoms and variations and the long-term management of disorders of deficiency or excess of one or more hormones. The diagnosis and treatment of endocrine diseases are guided by laboratory tests to a greater extent than for most specialties. Many diseases are investigated through excitation/stimulation or inhibition/suppression testing. This might involve injection with a stimulating agent to test the function of an endocrine organ. Blood is then sampled to assess the changes of the relevant hormones or metabolites. An endocrinologist needs extensive knowledge of clinical chemistry and biochemistry to understand the uses and limitations of the investigations. A second important aspect of the practice of endocrinology is distinguishing human variation from disease. Atypical patterns of physical development and abnormal test results must be assessed as indicative of disease or not. Diagnostic imaging of endocrine organs may reveal incidental findings called incidentalomas, which may or may not represent disease. Endocrinology involves caring for the person as well as the disease. Most endocrine disorders are chronic diseases that need lifelong care. Some of the most common endocrine diseases include diabetes mellitus, hypothyroidism and the metabolic syndrome. Care of diabetes, obesity and other chronic diseases necessitates understanding the patient at the personal and social level as well as the molecular, and the physician–patient relationship can be an important therapeutic process. Apart from treating patients, many endocrinologists are involved in clinical science and medical research, teaching, and hospital management. Training Endocrinologists are specialists of internal medicine or pediatrics. Reproductive endocrinologists deal primarily with problems of fertility and menstrual function—often training first in obstetrics. Most qualify as an internist, pediatrician, or gynecologist for a few years before specializing, depending on the local training system. In the U.S. and Canada, training for board certification in internal medicine, pediatrics, or gynecology after medical school is called residency. Further formal training to subspecialize in adult, pediatric, or reproductive endocrinology is called a fellowship. Typical training for a North American endocrinologist involves 4 years of college, 4 years of medical school, 3 years of residency, and 2 years of fellowship. In the US, adult endocrinologists are board certified by the American Board of Internal Medicine (ABIM) or the American Osteopathic Board of Internal Medicine (AOBIM) in Endocrinology, Diabetes and Metabolism. 
Diseases treated by endocrinologists Diabetes mellitus: This is a chronic condition that affects how your body regulates blood sugar. There are two main types: type 1 diabetes, which is an autoimmune disease that occurs when the body attacks the cells that produce insulin, and type 2 diabetes, which is a condition in which the body either doesn't produce enough insulin or doesn't use it effectively. Thyroid disorders: These are conditions that affect the thyroid gland, a butterfly-shaped gland located in the front of your neck. The thyroid gland produces hormones that regulate your metabolism, heart rate, and body temperature. Common thyroid disorders include hyperthyroidism (overactive thyroid) and hypothyroidism (underactive thyroid). Adrenal disorders: The adrenal glands are located on top of your kidneys. They produce hormones that help regulate blood pressure, blood sugar, and the body's response to stress. Common adrenal disorders include Cushing syndrome (excess cortisol production) and Addison's disease (adrenal insufficiency). Pituitary disorders: The pituitary gland is a pea-sized gland located at the base of the brain. It produces hormones that control many other hormone-producing glands in the body. Common pituitary disorders include acromegaly (excess growth hormone production) and Cushing's disease (excess ACTH production). Metabolic disorders: These are conditions that affect how your body processes food into energy. Common metabolic disorders include obesity, high cholesterol, and gout. Calcium and bone disorders: Endocrinologists also treat conditions that affect calcium levels in the blood, such as hyperparathyroidism (too much parathyroid hormone) and osteoporosis (weak bones). Sexual and reproductive disorders: Endocrinologists can also help diagnose and treat hormonal problems that affect sexual development and function, such as polycystic ovary syndrome (PCOS) and erectile dysfunction. Endocrine cancers: These are cancers that develop in the endocrine glands. Endocrinologists can help diagnose and treat these cancers. Diseases and medicine Diseases See main article at Endocrine diseases Endocrinology also involves the study of the diseases of the endocrine system. These diseases may relate to too little or too much secretion of a hormone, too little or too much action of a hormone, or problems with receiving the hormone. Societies and Organizations Because endocrinology encompasses so many conditions and diseases, there are many organizations that provide education to patients and the public. The Hormone Foundation is the public education affiliate of The Endocrine Society and provides information on all endocrine-related conditions. Other educational organizations that focus on one or more endocrine-related conditions include the American Diabetes Association, Human Growth Foundation, American Menopause Foundation, Inc., and American Thyroid Association. In North America the principal professional organizations of endocrinologists include The Endocrine Society, the American Association of Clinical Endocrinologists, the American Diabetes Association, the Lawson Wilkins Pediatric Endocrine Society, and the American Thyroid Association. In Europe, the European Society of Endocrinology (ESE) and the European Society for Paediatric Endocrinology (ESPE) are the main organisations representing professionals in the fields of adult and paediatric endocrinology, respectively. 
In the United Kingdom, the Society for Endocrinology and the British Society for Paediatric Endocrinology and Diabetes are the main professional organisations. The European Society for Paediatric Endocrinology is the largest international professional association dedicated solely to paediatric endocrinology. There are numerous similar associations around the world. History The earliest study of endocrinology began in China. The Chinese were isolating sex and pituitary hormones from human urine and using them for medicinal purposes by 200 BC. They used many complex methods, such as sublimation of steroid hormones. Another method specified by Chinese texts—the earliest dating to 1110—specified the use of saponin (from the beans of Gleditsia sinensis) to extract hormones, but gypsum (containing calcium sulfate) was also known to have been used. Although most of the relevant tissues and endocrine glands had been identified by early anatomists, a more humoral approach to understanding biological function and disease was favoured by the ancient Greek and Roman thinkers such as Aristotle, Hippocrates, Lucretius, Celsus, and Galen, according to Freeman et al., and these theories held sway until the advent of germ theory, physiology, and organ basis of pathology in the 19th century. In 1849, Arnold Berthold noted that castrated cockerels did not develop combs and wattles or exhibit overtly male behaviour. He found that replacement of testes back into the abdominal cavity of the same bird or another castrated bird resulted in normal behavioural and morphological development, and he concluded (erroneously) that the testes secreted a substance that "conditioned" the blood that, in turn, acted on the body of the cockerel. In fact, one of two other things could have been true: that the testes modified or activated a constituent of the blood or that the testes removed an inhibitory factor from the blood. It was not proven that the testes released a substance that engenders male characteristics until it was shown that the extract of testes could replace their function in castrated animals. Pure, crystalline testosterone was isolated in 1935. Graves' disease was named after Irish doctor Robert James Graves, who described a case of goiter with exophthalmos in 1835. The German Karl Adolph von Basedow also independently reported the same constellation of symptoms in 1840, while earlier reports of the disease were also published by the Italians Giuseppe Flajani and Antonio Giuseppe Testa, in 1802 and 1810 respectively, and by the English physician Caleb Hillier Parry (a friend of Edward Jenner) in the late 18th century. Thomas Addison was first to describe Addison's disease in 1849. In 1902 William Bayliss and Ernest Starling performed an experiment in which they observed that acid instilled into the duodenum caused the pancreas to begin secretion, even after they had removed all nervous connections between the two. The same response could be produced by injecting extract of jejunum mucosa into the jugular vein, showing that some factor in the mucosa was responsible. They named this substance "secretin" and coined the term hormone for chemicals that act in this way. Joseph von Mering and Oskar Minkowski made the observation in 1889 that removing the pancreas surgically led to an increase in blood sugar, followed by a coma and eventual death—symptoms of diabetes mellitus. In 1922, Banting and Best realized that homogenizing the pancreas and injecting the derived extract reversed this condition. 
Neurohormones were first identified by Otto Loewi in 1921. He incubated a frog's heart (innervated, with its vagus nerve attached) in a saline bath and left it in the solution for some time. The solution was then used to bathe a non-innervated second heart. If the vagus nerve on the first heart was stimulated, negative inotropic (beat amplitude) and chronotropic (beat rate) activity were seen in both hearts. This did not occur in either heart if the vagus nerve was not stimulated. The vagus nerve was adding something to the saline solution. The effect could be blocked using atropine, a known inhibitor of vagal stimulation of the heart. Clearly, something was being secreted by the vagus nerve and affecting the heart. The "vagusstuff" (as Loewi called it) causing the myotropic (muscle-enhancing) effects was later identified as acetylcholine and norepinephrine. Loewi won the Nobel Prize for his discovery. Recent work in endocrinology focuses on the molecular mechanisms responsible for triggering the effects of hormones. The first example of such work was done in 1962 by Earl Sutherland. Sutherland investigated whether hormones enter cells to evoke action or stay outside of cells. He studied norepinephrine, which acts on the liver to convert glycogen into glucose via the activation of the phosphorylase enzyme. He homogenized the liver into a membrane fraction and a soluble fraction (phosphorylase is soluble), added norepinephrine to the membrane fraction, extracted its soluble products, and added them to the first soluble fraction. Phosphorylase became activated, indicating that norepinephrine's target receptor was on the cell membrane, not located intracellularly. He later identified the compound as cyclic AMP (cAMP) and with his discovery created the concept of second-messenger-mediated pathways. He, like Loewi, won the Nobel Prize for his groundbreaking work in endocrinology. See also Comparative endocrinology Endocrine disease Hormone Hormone replacement therapy Neuroendocrinology Pediatric endocrinology Reproductive endocrinology and infertility Wildlife endocrinology List of instruments used in endocrinology
Entropy
Entropy is a scientific concept that is most commonly associated with a state of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, sociology, weather science, climate change, and information systems including the transmission of information in telecommunication. Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. A consequence of the second law of thermodynamics is that certain processes are irreversible. The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation. Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behavior, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, that has become one of the defining universal constants for the modern International System of Units (SI). History In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. 
Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body". The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction. He described his observations as a dissipative use of energy, resulting in a transformation-content ( in German), of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word [tropē], which is translated in an established lexicon as turning or change and that he rendered in German as , a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. Etymology In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system", entropy after the Greek word for 'transformation'. He gave "transformational content" as a synonym, paralleling his "thermal and ergonal content" as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ('ergon', 'work') by that of ('tropy', 'transformation'). In more detail, Clausius explained his choice of "entropy" as a name as follows: I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful. 
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing". Definitions and descriptions The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes. State variables and functions of state Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero. Reversible process The entropy change of a system excluding its surroundings can be well-defined as a small portion of heat transferred to the system during reversible process divided by the temperature of the system during this heat transfer:The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible. In contrast, irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from the thermal equilibrium cannot be reversible, the total entropy increases, and the potential for maximum work to be done during the process is lost. Carnot cycle The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle that is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. 
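In the conventional notation of thermodynamics, the reversible-process definition described above can be written compactly as follows; δQ_rev denotes an infinitesimal amount of heat transferred reversibly and T the absolute temperature of the system during the transfer (standard symbols, given here for concreteness):

```latex
% Entropy change for reversible heat transfer (standard form of the
% relation described in the preceding passage)
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
```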
In a Carnot cycle the heat is transferred from a hot reservoir to a working gas at the constant temperature during isothermal expansion stage and the heat is transferred from a working gas to a cold reservoir at the constant temperature during isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce a work if and only if there is a temperature difference between reservoirs. Originally, Carnot did not distinguish between heats and , as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat is greater than the magnitude of heat . Through the efforts of Clausius and Kelvin, the work done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat absorbed by a working body of the engine during isothermal expansion:To derive the Carnot efficiency Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is known that a work produced by an engine over a cycle equals to a net heat absorbed over a cycle. Thus, with the sign convention for a heat transferred in a thermodynamic process ( for an absorption and for a dissipation) we get:Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between a work and a net heat would be conserved, rather than a net heat itself. Which means there exists a state function with a change of . It is called an internal energy and forms a central concept for the first law of thermodynamics. Finally, comparison for the both representations of a work output in a Carnot cycle gives us:Similarly to the derivation of internal energy, this equality implies existence of a state function with a change of and which is conserved over an entire cycle. Clausius called this state function entropy. In addition, the total change of entropy in both thermal reservoirs over Carnot cycle is zero too, since the inversion of a heat transfer direction means a sign inversion for the heat transferred during isothermal stages:Here we denote the entropy change for a thermal reservoir by , where is either for a hot reservoir or for a cold one. If we consider a heat engine which is less effective than Carnot cycle (i.e., the work produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by Carnot efficiency as:Substitution of the work as the net heat into the inequality above gives us:or in terms of the entropy change :A Carnot cycle and an entropy as shown above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as an Otto, Diesel or Brayton cycle, could be analyzed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., heat engine) what is claimed to produce an efficiency greater than the one of Carnot is not viable — due to violation of the second law of thermodynamics. 
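The relations obtained in the Carnot-cycle discussion above can be summarized in standard notation; here T_H and T_C are the hot- and cold-reservoir temperatures and Q_H, Q_C the heats exchanged with them (conventional labels, introduced here rather than taken from the text):

```latex
% Carnot efficiency, work output, and the entropy bookkeeping over one
% reversible cycle; the final inequality is the Clausius form obeyed by
% any engine less efficient than the Carnot limit.
\eta_{\mathrm{C}} = 1 - \frac{T_C}{T_H}, \qquad
W = \eta_{\mathrm{C}}\, Q_H, \qquad
\frac{|Q_H|}{T_H} = \frac{|Q_C|}{T_C}
\;\;\Rightarrow\;\;
\Delta S_{\text{cycle}} = \frac{Q_H}{T_H} - \frac{Q_C}{T_C} = 0,
\qquad
\oint \frac{\delta Q}{T} \le 0 .
```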
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, description of devices operating near limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics. Classical thermodynamics The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system. While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur. According to the Clausius equality, for a reversible cyclic thermodynamic process: which means the line integral is path-independent. Thus we can define a state function , called entropy:Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI). To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since an entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different as well as its entropy change. We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we consider the third law of thermodynamics: perfect crystals at the absolute zero have an entropy . From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process, where the system gives up of energy to the surrounding at the temperature , its entropy falls by and at least of that energy must be given up to the system's surroundings as a heat. Otherwise, this process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen in standard conditions is well-defined). 
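The Clausius equality and the resulting state-function definition referred to in the passage above take the following standard form, where A and B label any two equilibrium states connected by a reversible path:

```latex
% Clausius equality for a reversible cycle and the entropy state function;
% S then carries the SI unit joule per kelvin (J/K), as stated in the text.
\oint \frac{\delta Q_{\mathrm{rev}}}{T} = 0,
\qquad
\Delta S = S_B - S_A = \int_A^B \frac{\delta Q_{\mathrm{rev}}}{T}.
```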
Statistical mechanics The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature. The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property — either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1). Specifically, entropy is a logarithmic measure for the system with a number of states, each with a probability of being occupied (usually given by the Boltzmann distribution):where is the Boltzmann constant and the summation is performed over all possible microstates of the system. In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In a general case the expression is:where is a density matrix, is a trace operator and is a matrix logarithm. Density matrix formalism is not required if the system occurs to be in a thermal equilibrium so long as the basis states are chosen to be eigenstates of Hamiltonian. For most practical purposes it can be taken as the fundamental definition of entropy since all other formulae for can be derived from it, but not vice versa. 
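The statistical-mechanical definitions discussed above are conventionally written as follows; p_i is the probability of microstate i, k_B the Boltzmann constant, and ρ the density matrix (standard symbols):

```latex
% Gibbs (discrete) form and the density-matrix generalization
S = -k_{\mathrm{B}} \sum_i p_i \ln p_i,
\qquad
S = -k_{\mathrm{B}}\, \operatorname{Tr}\!\left(\rho \ln \rho\right).
```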
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability , where is the number of microstates whose energy equals to the one of the system. Usually, this assumption is justified for an isolated system in a thermodynamic equilibrium. Then in case of an isolated system the previous formula reduces to:In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble. The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model. The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using variables , , and observer B using variables , , , . If observer B changes variable , then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during experiment. Entropy can also be defined for any Markov processes with reversible dynamics and the detailed balance property. In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics. Entropy of a system Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state. In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. 
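As a concrete, order-of-magnitude illustration of the equalization argument above (a cold parcel of water warming up in a room treated as a large heat reservoir), the short sketch below computes the entropy changes numerically. The mass, heat capacity, and temperatures are illustrative assumptions, not values from the text.

```python
import math

# Illustrative numbers only: a 0.25 kg parcel of cold water (just above
# freezing) equilibrates with a large room treated as a heat reservoir
# at a fixed 293 K. This mirrors the glass-of-ice-water example above.
m = 0.25          # kg of water (assumed)
c = 4186.0        # J/(kg K), specific heat of liquid water
T_cold = 273.15   # K, initial water temperature (assumed)
T_room = 293.15   # K, room (reservoir) temperature (assumed)

# Heat absorbed by the water as it warms to room temperature
Q = m * c * (T_room - T_cold)

# Entropy gained by the water (integrating dS = m c dT / T)
dS_water = m * c * math.log(T_room / T_cold)

# Entropy lost by the room, modeled as a reservoir at constant T_room
dS_room = -Q / T_room

dS_total = dS_water + dS_room
print(f"water: +{dS_water:.2f} J/K, room: {dS_room:.2f} J/K, "
      f"total: +{dS_total:.2f} J/K (> 0, as the second law requires)")
```

In this toy calculation the water gains slightly more entropy than the reservoir loses, so the entropy of the combined "universe" increases, exactly as the passage describes.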
The entropy of the thermodynamic system is a measure of how far the equalization has progressed. Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do. Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds. One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work" in a cyclic process. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine. A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing. Equivalence of definitions Proofs of equivalence between the entropy in statistical mechanics — the Gibbs entropy formula:and the entropy in classical thermodynamics:together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalized Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average . Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution. 
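For the entropy of mixing mentioned above, the ideal result (same temperature and pressure, no interactions between the components) is the textbook formula below; n is the total amount of substance and x_i the mole fraction of component i (standard symbols):

```latex
% Ideal entropy of mixing at constant temperature and pressure;
% each term -x_i ln x_i is non-negative, so mixing always raises entropy.
\Delta S_{\mathrm{mix}} = -\,n R \sum_i x_i \ln x_i \;>\; 0
\quad\text{whenever more than one } x_i \text{ is nonzero.}
```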
Furthermore, it has been shown that the definitions of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under the following postulates: Second law of thermodynamics The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient. It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics. In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature absorbing an infinitesimal amount of heat in a reversible way, is given by . More explicitly, an energy is not available to do useful work, where is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy. Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely. The applicability of a second law of thermodynamics is limited to systems in or sufficiently near equilibrium state, so that they have defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that entropy density is locally defined as an intensive quantity. For such systems, there may apply a principle of maximum time rate of entropy production. It states that such a system may evolve to a steady state that maximizes its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state. Applications The fundamental thermodynamic relation The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy to changes in the entropy and the external parameters. 
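The equation announced in the sentence above, and identified in the next passage as the fundamental thermodynamic relation, has the standard form below when pressure–volume work is the only external parameter; an example Maxwell relation that follows from it is included to illustrate the identities mentioned afterwards:

```latex
% Fundamental thermodynamic relation (pressure-volume work only) and an
% example Maxwell relation obtained from equality of mixed derivatives of U
\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V,
\qquad
\left(\frac{\partial T}{\partial V}\right)_{S}
= -\left(\frac{\partial p}{\partial S}\right)_{V}.
```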
This relation is known as the fundamental thermodynamic relation. If external pressure bears on the volume as the only external parameter, this relation is:Since both internal energy and entropy are monotonic functions of temperature , implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist). The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities. Entropy in chemical thermodynamics Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system — the combination of a subsystem under study and its surroundings — increases during all spontaneous chemical and physical processes. The Clausius equation introduces the measurement of entropy change which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems — always from hotter body to cooler one spontaneously. Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol−1⋅K−1. Thus, when one mole of substance at about is warmed by its surroundings to , the sum of the incremental values of constitute each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at . Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture. Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, must be incorporated in an expression that includes both the system and its surroundings: Via additional steps this expression becomes the equation of Gibbs free energy change for reactants and products in the system at the constant pressure and temperature :where is the enthalpy change and is the entropy change. World's technological capacity to store and communicate entropic information A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The author's estimate that human kind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. 
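The Gibbs free energy expression referred to in the chemical-thermodynamics passage above is conventionally written as follows, with ΔH the enthalpy change, ΔS the entropy change, and T the absolute temperature at constant pressure:

```latex
% Gibbs free energy change at constant temperature and pressure;
% a negative value indicates a spontaneous process under these conditions.
\Delta G = \Delta H - T\,\Delta S .
```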
The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (entropically compressed) information in 1986, to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (entropically compressed) information in 1986, to 65 (entropically compressed) exabytes in 2007. Entropy balance equation for open systems In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, flow of heat , flow of shaft work and pressure-volume work across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer , where is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system. To derive a generalized entropy balanced equation, we start with the general balance equation for the change in any extensive quantity in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that , i.e. the rate of change of in the system, equals the rate at which enters the system at the boundaries, minus the rate at which leaves the system across the system boundaries, plus the rate at which is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time of the extensive quantity entropy , the entropy balance equation is:where is the net rate of entropy flow due to the flows of mass into and out of the system with entropy per unit mass , is the rate of entropy flow due to the flow of heat across the system boundary and is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity. In case of multiple heat flows the term is replaced by , where is the heat flow through -th port into the system and is the temperature at the -th port. The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that:with zero for reversible process and positive values for irreversible one. Entropy change formulas for simple processes For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas. Isothermal expansion or compression of an ideal gas For the expansion (or compression) of an ideal gas from an initial volume and pressure to a final volume and pressure at any constant temperature, the change in entropy is given by:Here is the amount of gas (in moles) and is the ideal gas constant. 
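The isothermal ideal-gas formula described at the end of the passage above is, in standard notation (n the amount of gas in moles, R the ideal gas constant, subscripts 1 and 2 marking initial and final volume and pressure):

```latex
% Entropy change of an ideal gas for isothermal expansion or compression.
% Example: doubling the volume of one mole gives
% \Delta S = R \ln 2 \approx 5.76\ \mathrm{J\,K^{-1}}.
\Delta S = n R \ln\frac{V_{2}}{V_{1}} = -\,n R \ln\frac{P_{2}}{P_{1}} .
```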
These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant. Cooling and heating For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature to a final temperature , the entropy change is: provided that the constant-pressure molar heat capacity (or specific heat) is constant and that no phase transition occurs in this temperature interval. Similarly at constant volume, the entropy change is:where the constant-volume molar heat capacity is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply. Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:Similarly if the temperature and pressure of an ideal gas both vary: Phase transitions Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point , the entropy of fusion is:Similarly, for vaporization of a liquid to a gas at the boiling point , the entropy of vaporization is: Approaches to understanding entropy As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid. Standard textbook definitions The following is a list of additional definitions of entropy from a collection of textbooks: a measure of energy dispersal at a specific temperature. a measure of disorder in the universe or of the availability of the energy in a system to do work. a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work. In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Order and disorder Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. 
He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" in the system is given by:Similarly, the total amount of "order" in the system is given by:In which is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and is the "order" capacity of the system. Energy dispersal The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both". Relating entropy to energy usefulness It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced. As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorized to lead to the heat death of the universe. Entropy and adiabatic accessibility A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states and such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state is defined as the largest number such that is adiabatically accessible from a composite state consisting of an amount in the state and a complementary amount, , in the state . 
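The defining formula referenced here did not survive extraction; in the Lieb–Yngvason formulation it is usually written, with \(\prec\) denoting adiabatic accessibility and \(X_0\), \(X_1\) the reference states assigned entropies 0 and 1, as

\[
S(X) \;=\; \sup\big\{\, \lambda : \big((1-\lambda)\,X_0,\; \lambda\, X_1\big) \prec X \,\big\}.
\]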
A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling. Entropy in quantum mechanics In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":where is the density matrix, is the trace operator and is the Boltzmann constant. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities :i.e. in such a basis the density matrix is diagonal. Von Neumann established a rigorous mathematical framework for quantum mechanics with his work . He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain. Information theory When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities so that:where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits). In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message. Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is . The Shannon entropy (in nats) is:and if entropy is measured in units of per nat, then the entropy is given by:which is the Boltzmann entropy formula, where is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the function of information theory and using Shannon's other term, "uncertainty", instead. Measurement The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with constant number of particles and constant volume , and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat :The resulting relation describes how entropy changes when a small amount of energy is introduced into the system at a certain temperature . 
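The relation invoked here (lost with the inline math) is the thermodynamic definition of temperature at constant volume and particle number; a standard way to write it, together with the integral actually evaluated in the measurement, is

\[
\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{V,N}
\quad\Rightarrow\quad
dS = \frac{\delta Q}{T},
\qquad
S(T_f) \;=\; \int_{0}^{T_f} \frac{C_V(T)}{T}\, dT \quad (V, N \ \text{constant}),
\]

where the heat-capacity form assumes no phase transition occurs between absolute zero and \(T_f\).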
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The data obtained allow the user to integrate the equation above, yielding the absolute value of the entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy. Interdisciplinary applications Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution. Philosophy and theoretical physics Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. The second law of thermodynamics states that, as time progresses, the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement can be thought of as a kind of clock under these conditions. Biology Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimization. Entropy has proved useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, to differentiate between coding and non-coding regions of DNA, and to be applicable to the reconstruction of evolutionary trees by determining the evolutionary distance between different species. Cosmology Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source. If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which eventually collapse into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation). The role of entropy in cosmology has remained a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium.
Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult. Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe. Economics Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics. In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position. See also Boltzmann entropy Brownian ratchet Configuration entropy Conformational entropy Entropic explosion Entropic force Entropic value at risk Entropy and life Entropy unit Free entropy Harmonic entropy Info-metrics Negentropy (negative entropy) Phase space Principle of maximum entropy Residual entropy Thermodynamic potential Notes References Further reading Lambert, Frank L.; Sharp, Kim (2019). Entropy and the Tao of Counting: A Brief Introduction to Statistical Mechanics and the Second Law of Thermodynamics (SpringerBriefs in Physics). Springer Nature. . Spirax-Sarco Limited, Entropy – A Basic Understanding A primer on entropy tables for steam engineering External links "Entropy" at Scholarpedia Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008 Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle Khan Academy: entropy lectures, part of Chemistry playlist Entropy Intuition More on Entropy Proof: S (or Entropy) is a valid state variable Reconciling Thermodynamic and State Definitions of Entropy Thermodynamic Entropy Definition Clarification The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013. The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200) Physical quantities Philosophy of thermal and statistical physics State functions Asymmetry Extensive quantities
0.770252
0.999795
0.770094
Intrinsic and extrinsic properties
In science and engineering, an intrinsic property is a property of a specified subject that exists itself or within the subject. An extrinsic property is not essential or inherent to the subject that is being characterized. For example, mass is an intrinsic property of any physical object, whereas weight is an extrinsic property that depends on the strength of the gravitational field in which the object is placed. Applications in science and engineering In materials science, an intrinsic property is independent of how much of a material is present and is independent of the form of the material, e.g., one large piece or a collection of small particles. Intrinsic properties are dependent mainly on the fundamental chemical composition and structure of the material. Extrinsic properties are differentiated as being dependent on the presence of avoidable chemical contaminants or structural defects. In biology, intrinsic effects originate from inside an organism or cell, such as an autoimmune disease or intrinsic immunity. In electronics and optics, intrinsic properties of devices (or systems of devices) are generally those that are free from the influence of various types of non-essential defects. Such defects may arise as a consequence of design imperfections, manufacturing errors, or operational extremes and can produce distinctive and often undesirable extrinsic properties. The identification, optimization, and control of both intrinsic and extrinsic properties are among the engineering tasks necessary to achieve the high performance and reliability of modern electrical and optical systems. See also Intensive and extensive properties Motivation References Abstraction Ontology Scientific phenomena
0.777316
0.990687
0.770077
Dynamic equilibrium
In chemistry, a dynamic equilibrium exists once a reversible reaction occurs. Substances transition between the reactants and products at equal rates, meaning there is no net change. Reactants and products are formed at such a rate that the concentration of neither changes. It is a particular example of a system in a steady state. In physics, concerning thermodynamics, a closed system is in thermodynamic equilibrium when reactions occur at such rates that the composition of the mixture does not change with time. Reactions do in fact occur, sometimes vigorously, but to such an extent that changes in composition cannot be observed. Equilibrium constants can be expressed in terms of the rate constants for reversible reactions. Examples In a new bottle of soda, the concentration of carbon dioxide in the liquid phase has a particular value. If half of the liquid is poured out and the bottle is sealed, carbon dioxide will leave the liquid phase at an ever-decreasing rate, and the partial pressure of carbon dioxide in the gas phase will increase until equilibrium is reached. At that point, due to thermal motion, a molecule of CO2 may leave the liquid phase, but within a very short time another molecule of CO2 will pass from the gas to the liquid, and vice versa. At equilibrium, the rate of transfer of CO2 from the gas to the liquid phase is equal to the rate from liquid to gas. In this case, the equilibrium concentration of CO2 in the liquid is given by Henry's law, which states that the solubility of a gas in a liquid is directly proportional to the partial pressure of that gas above the liquid. This relationship is written as where K is a temperature-dependent constant, P is the partial pressure, and c is the concentration of the dissolved gas in the liquid. Thus the partial pressure of CO2 in the gas has increased until Henry's law is obeyed. The concentration of carbon dioxide in the liquid has decreased and the drink has lost some of its fizz. Henry's law may be derived by setting the chemical potentials of carbon dioxide in the two phases to be equal to each other. Equality of chemical potential defines chemical equilibrium. Other constants for dynamic equilibrium involving phase changes, include partition coefficient and solubility product. Raoult's law defines the equilibrium vapor pressure of an ideal solution Dynamic equilibrium can also exist in a single-phase system. A simple example occurs with acid-base equilibrium such as the dissociation of acetic acid, in an aqueous solution. CH3COOH <=> CH3COO- + H+ At equilibrium the concentration quotient, K, the acid dissociation constant, is constant (subject to some conditions) In this case, the forward reaction involves the liberation of some protons from acetic acid molecules and the backward reaction involves the formation of acetic acid molecules when an acetate ion accepts a proton. Equilibrium is attained when the sum of chemical potentials of the species on the left-hand side of the equilibrium expression is equal to the sum of chemical potentials of the species on the right-hand side. At the same time, the rates of forward and backward reactions are equal to each other. Equilibria involving the formation of chemical complexes are also dynamic equilibria and concentrations are governed by the stability constants of complexes. Dynamic equilibria can also occur in the gas phase as, for example when nitrogen dioxide dimerizes. 2NO2 <=> N2O4; In the gas phase, square brackets indicate partial pressure. 
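The equilibrium expressions referred to in this passage were dropped from the text; under the usual conventions (concentrations for the acid dissociation in solution, partial pressures for the gas-phase dimerization) they take the standard forms

\[
K_a = \frac{[\mathrm{CH_3COO^-}]\,[\mathrm{H^+}]}{[\mathrm{CH_3COOH}]},
\qquad
K = \frac{[\mathrm{N_2O_4}]}{[\mathrm{NO_2}]^2},
\]

where, as noted above, the square brackets in the gas-phase case denote partial pressures.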
Alternatively, the partial pressure of a substance may be written as P(substance). Relationship between equilibrium and rate constants In a simple reaction such as the isomerization A <=> B there are two reactions to consider: the forward reaction, in which species A is converted into B, and the backward reaction, in which B is converted into A. If both reactions are elementary reactions, then the rate of reaction is given by \(\frac{d[A]}{dt} = -k_f[A] + k_b[B]\), where \(k_f\) is the rate constant for the forward reaction, \(k_b\) is the rate constant for the backward reaction, and the square brackets denote concentration. If only A is present at the beginning, time \(t = 0\), with a concentration \([A]_0\), the sum of the two concentrations, \([A]_t\) and \([B]_t\), at time \(t\), will be equal to \([A]_0\). The solution to this differential equation is \([A]_t = [A]_0\,\frac{k_b + k_f\, e^{-(k_f + k_b)t}}{k_f + k_b}\). As time tends towards infinity, the concentrations \([A]_t\) and \([B]_t\) tend towards constant values. Letting \(t\) approach infinity, that is, \(t \to \infty\), in the expression above gives \([A]_\infty = [A]_0\,\frac{k_b}{k_f + k_b}\) and \([B]_\infty = [A]_0\,\frac{k_f}{k_f + k_b}\). In practice, concentration changes will not be measurable after a time long compared with the relaxation time \(1/(k_f + k_b)\). Since the concentrations do not change thereafter, they are, by definition, equilibrium concentrations. Now, the equilibrium constant for the reaction is defined as \(K = \frac{[B]_{eq}}{[A]_{eq}} = \frac{k_f}{k_b}\). It follows that the equilibrium constant is numerically equal to the quotient of the rate constants. In general, there may be more than one forward reaction and more than one backward reaction. Atkins states that, for a general reaction, the overall equilibrium constant is related to the rate constants of the elementary reactions by the product of the ratios of the forward and backward rate constants for each step, \(K = \frac{k_1}{k_1'}\cdot\frac{k_2}{k_2'}\cdots\). See also Equilibrium chemistry Mechanical equilibrium Chemical equilibrium Radiative equilibrium References External links Dynamic Equilibrium Example - Wolfram Demonstrations Project Equilibrium chemistry Thermodynamics
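As a numerical check on the relationship between the equilibrium constant and the rate constants described above, the following minimal Python sketch (the rate constants and time step are arbitrary illustrative values, not taken from the text) integrates the reversible isomerization and compares the long-time concentration ratio with the quotient of the rate constants:

```python
# Reversible isomerization A <=> B with elementary forward and backward steps.
kf, kb = 2.0, 0.5     # illustrative rate constants, s^-1
A0 = 1.0              # initial concentration of A (mol/L); B starts at zero
dt, t_end = 1e-4, 20.0

A = A0
for _ in range(int(t_end / dt)):
    # d[A]/dt = -kf*[A] + kb*[B], with [B] = A0 - [A] by mass balance
    A += (-kf * A + kb * (A0 - A)) * dt

B = A0 - A
print(f"[B]/[A] at long times ~ {B / A:.3f}")    # numerical equilibrium ratio
print(f"kf/kb                 = {kf / kb:.3f}")  # equilibrium constant K
```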
0.77964
0.987732
0.770076
Food science
Food science (or bromatology) is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology. Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example. Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties. Definition The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing". Disciplines Some of the subdisciplines of food science are described below. Food chemistry Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk. It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This discipline also encompasses how products change under certain food processing techniques and ways either to enhance or to prevent them from happening. Food physical chemistry Food physical chemistry is the study of both physical and chemical interactions in foods in terms of physical and chemical principles applied to food systems, as well as the application of physicochemical techniques and instrumentation for the study and analysis of foods. Food engineering Food engineering is the industrial processes used to manufacture food. It involves coming up with novel approaches for manufacturing, packaging, delivering, ensuring quality, ensuring safety, and devising techniques to transform raw ingredients into wholesome food options. Food microbiology Food microbiology is the study of the microorganisms that inhabit, create, or contaminate food, including the study of microorganisms causing food spoilage. "Good" bacteria, however, such as probiotics, are becoming increasingly important in food science. In addition, microorganisms are essential for the production of foods such as cheese, yogurt, bread, beer, wine and, other fermented foods. Food technology Food technology is the technological aspect. Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. 
The process was not called canning then and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques. Foodomics In 2009, Foodomics was defined as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge". Foodomics requires the combination of food chemistry, biological sciences, and data analysis. Foodomics greatly helps scientists in the area of food science and nutrition to gain better access to data, which is used to analyze the effects of food on human health, etc. It is believed to be another step towards a better understanding of the development and application of technology and food. Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics which is the integration of the study of nutrition, genes, and omics. Molecular gastronomy Molecular gastronomy is a subdiscipline of food science that seeks to investigate the physical and chemical transformations of ingredients that occur in cooking. Its program includes three axes, as cooking was recognized to have three components, which are social, artistic, and technical. Quality control Quality control involves the causes, prevention, and communication dealing with food-borne illness. Quality control also ensures that the product meets specs to ensure the customer receives what they expect from the packaging to the physical properties of the product itself. Sensory analysis Sensory analysis is the study of how consumer's senses perceive food. Careers in Food Science The five most common college degrees leading to a career in food science are: Food science/technology (66%), biological sciences (12%), business/marketing (10%), nutrition (9%) and chemistry (8%). Careers available to food scientists include food technologists, research and development (R&D), quality control, flavor chemistry, laboratory director, food analytical chemist and technical sales. The five most common positions for food scientists are food scientist/technologist (19%), product developer (12%), quality assurance/control director (8%), other R&D/scientific/technical (7%), and director of research (5%). By country Australia The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is the federal government agency for scientific research in Australia. CSIRO maintains more than 50 sites across Australia and biological control research stations in France and Mexico. It has nearly 6,500 employees. South Korea The Korean Society of Food Science and Technology, or KoSFoST, claims to be the first society in South Korea for food science. United States In the United States, food science is typically studied at land-grant universities. Some of the country's pioneering food scientists were women who attended chemistry programs at land-grant universities which were state-run and largely under state mandates to allow for sex-blind admission. Although after graduation, they had difficulty finding jobs due to widespread sexism in the chemistry industry in the late 19th and early 20th centuries. Finding conventional career paths blocked, they found alternative employment as instructors in the home economics departments and used that as a base to launch the foundation of many modern food science programs. 
The main US organization regarding food science and food technology is the Institute of Food Technologists (IFT), headquartered in Chicago, Illinois, which is the US member organisation of the International Union of Food Science and Technology (IUFoST). See also Publications Books Food Science is an academic topic so most food science books are textbooks. Journals Notes and references Further reading Wanucha, Genevieve (February 24, 2009). "Two Happy Clams: The friendship that forged food science". MIT Technology Review. External links Learn about Food Science Applied sciences
0.775149
0.993441
0.770065
Bioorthogonal chemistry
The term bioorthogonal chemistry refers to any chemical reaction that can occur inside of living systems without interfering with native biochemical processes. The term was coined by Carolyn R. Bertozzi in 2003. Since its introduction, the concept of the bioorthogonal reaction has enabled the study of biomolecules such as glycans, proteins, and lipids in real time in living systems without cellular toxicity. A number of chemical ligation strategies have been developed that fulfill the requirements of bioorthogonality, including the 1,3-dipolar cycloaddition between azides and cyclooctynes (also termed copper-free click chemistry), between nitrones and cyclooctynes, oxime/hydrazone formation from aldehydes and ketones, the tetrazine ligation, the isocyanide-based click reaction, and most recently, the quadricyclane ligation. The use of bioorthogonal chemistry typically proceeds in two steps. First, a cellular substrate is modified with a bioorthogonal functional group (chemical reporter) and introduced to the cell; substrates include metabolites, enzyme inhibitors, etc. The chemical reporter must not alter the structure of the substrate dramatically to avoid affecting its bioactivity. Secondly, a probe containing the complementary functional group is introduced to react and label the substrate. Although effective bioorthogonal reactions such as copper-free click chemistry have been developed, development of new reactions continues to generate orthogonal methods for labeling to allow multiple methods of labeling to be used in the same biosystems. Carolyn R. Bertozzi was awarded the Nobel Prize in Chemistry in 2022 for her development of click chemistry and bioorthogonal chemistry. Etymology The word bioorthogonal comes from Greek bio- "living" and orthogōnios "right-angled". Thus literally a reaction that goes perpendicular to a living system, thus not disturbing it. Requirements for bioorthogonality To be considered bioorthogonal, a reaction must fulfill a number of requirements: Selectivity: The reaction must be selective between endogenous functional groups to avoid side reactions with biological compounds Biological inertness: Reactive partners and resulting linkage should not possess any mode of reactivity capable of disrupting the native chemical functionality of the organism under study. Chemical inertness: The covalent link should be strong and inert to biological reactions. Kinetics: The reaction must be rapid so that covalent ligation is achieved prior to probe metabolism and clearance. The reaction must be fast, on the time scale of cellular processes (minutes) to prevent competition in reactions which may diminish the small signals of less abundant species. Rapid reactions also offer a fast response, necessary in order to accurately track dynamic processes. Reaction biocompatibility: Reactions have to be non-toxic and must function in biological conditions taking into account pH, aqueous environments, and temperature. Pharmacokinetics are a growing concern as bioorthogonal chemistry expands to live animal models. Accessible engineering: The chemical reporter must be capable of incorporation into biomolecules via some form of metabolic or protein engineering. Optimally, one of the functional groups is also very small so that it does not disturb native behavior. Staudinger ligation The Staudinger ligation is a reaction developed by the Bertozzi group in 2000 that is based on the classic Staudinger reaction of azides with triarylphosphines. 
It launched the field of bioorthogonal chemistry as the first reaction with completely abiotic functional groups although it is no longer as widely used. The Staudinger ligation has been used in both live cells and live mice. Bioorthogonality The azide can act as a soft electrophile that prefers soft nucleophiles such as phosphines. This is in contrast to most biological nucleophiles which are typically hard nucleophiles. The reaction proceeds selectively under water-tolerant conditions to produce a stable product. Phosphines are completely absent from living systems and do not reduce disulfide bonds despite mild reduction potential. Azides had been shown to be biocompatible in FDA-approved drugs such as azidothymidine and through other uses as cross linkers. Additionally, their small size allows them to be easily incorporated into biomolecules through cellular metabolic pathways. Mechanism Classic Staudinger reaction The nucleophilic phosphine attacks the azide at the electrophilic terminal nitrogen. Through a four-membered transition state, N2 is lost to form an aza-ylide. The unstable ylide is hydrolyzed to form phosphine oxide and a primary amine. However, this reaction is not immediately bioorthogonal because hydrolysis breaks the covalent bond in the aza-ylide. Staudinger ligation The reaction was modified to include an ester group ortho to the phosphorus atom on one of the aryl rings to direct the aza-ylide through a new path of reactivity in order to outcompete immediate hydrolysis by positioning the ester to increase local concentration. The initial nucleophilic attack on the azide is the rate-limiting step. The ylide reacts with the electrophilic ester trap through intramolecular cyclization to form a five-membered ring. This ring undergoes hydrolysis to form a stable amide bond. Limitations The phosphine reagents slowly undergo air oxidation in living systems. Additionally, it is likely that they are metabolized in vitro by cytochrome P450 enzymes. The kinetics of the reactions are slow with second order rate constants around 0.0020 M−1•s−1. Attempts to increase nucleophilic attack rates by adding electron-donating groups to the phosphines improved kinetics, but also increased the rate of air oxidation. The poor kinetics require that high concentrations of the phosphine be used which leads to problems with high background signal in imaging applications. Attempts have been made to combat the problem of high background through the development of a fluorogenic phosphine reagents based on fluorescein and luciferin, but the intrinsic kinetics remain a limitation. Copper-free click chemistry Copper-free click chemistry is a bioorthogonal reaction first developed by Carolyn Bertozzi as an activated variant of an azide alkyne Huisgen cycloaddition, based on the work by Karl Barry Sharpless et al. Unlike CuAAC, Cu-free click chemistry has been modified to be bioorthogonal by eliminating a cytotoxic copper catalyst, allowing reaction to proceed quickly and without live cell toxicity. Instead of copper, the reaction is a strain-promoted alkyne-azide cycloaddition (SPAAC). It was developed as a faster alternative to the Staudinger ligation, with the first generations reacting over sixty times faster. The bioorthogonality of the reaction has allowed the Cu-free click reaction to be applied within cultured cells, live zebrafish, and mice. 
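To put these second-order rate constants in practical terms, the short Python sketch below converts them into pseudo-first-order labeling half-lives at a fixed excess probe concentration; the 0.0020 M−1·s−1 value is the Staudinger figure quoted above, the sixty-fold faster value stands in for an early cyclooctyne, and the 250 µM probe concentration is a hypothetical choice used here only for illustration:

```python
import math

probe_conc = 250e-6  # mol/L; hypothetical excess-probe concentration
rate_constants = {   # second-order rate constants, M^-1 s^-1
    "Staudinger ligation": 2.0e-3,           # value quoted in the text
    "first-gen Cu-free click": 2.0e-3 * 60,  # "over sixty times faster"
}

for name, k2 in rate_constants.items():
    k_obs = k2 * probe_conc          # pseudo-first-order rate constant, s^-1
    t_half = math.log(2) / k_obs     # labeling half-life, seconds
    print(f"{name:>24}: t1/2 ~ {t_half / 3600:.1f} h")
```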
Copper toxicity The classic copper-catalyzed azide-alkyne cycloaddition has been an extremely fast and effective click reaction for bioconjugation, but it is not suitable for use in live cells due to the toxicity of Cu(I) ions. Toxicity is due to oxidative damage from reactive oxygen species formed by the copper catalysts. Copper complexes have also been found to induce changes in cellular metabolism and are taken up by cells. There has been some development of ligands to prevent biomolecule damage and facilitate removal in in vitro applications. However, it has been found that different ligand environments of complexes can still affect metabolism and uptake, introducing an unwelcome perturbation in cellular function. Bioorthogonality The azide group is particularly bioorthogonal because it is extremely small (favorable for cell permeability and avoids perturbations), metabolically stable, and does not naturally exist in cells and thus has no competing biological side reactions. Although azides are not the most reactive 1,3-dipole available for reaction, they are preferred for their relative lack of side reactions and stability in typical synthetic conditions. The alkyne is not as small, but it still has the stability and orthogonality necessary for in vivo labeling. Cyclooctynes are traditionally the most common cycloalkyne for labeling studies, as they are the smallest stable alkyne ring. Mechanism The reaction proceeds as a standard 1,3-dipolar cycloaddition, a type of asynchronous, concerted pericyclic shift. The ambivalent nature of the 1,3-dipole should make the identification of an electrophilic or nucleophilic center on the azide impossible such that the direction of the cyclic electron flow is meaningless. [p] However, computation has shown that the electron distribution amongst nitrogens causes the innermost nitrogen atom to bear the greatest negative charge. Regioselectivity Although the reaction produces a regioisomeric mixture of triazoles, the lack of regioselectivity in the reaction is not a major concern for most current applications. More regiospecific and less bioorthogonal requirements are best served by copper-catalyzed Huisgen cycloaddition, especially given the synthetic difficulty (compared to the addition of a terminal alkyne) of synthesizing a strained cyclooctyne. Development of cyclooctynes OCT was the first cyclooctyne developed for Cu-free click chemistry. While linear alkynes are unreactive at physiological temperatures, OCT was able readily react with azides in biological conditions while showing no toxicity. However, it was poorly water-soluble, and the kinetics were barely improved over the Staudinger ligation. ALO (aryl-less octyne) was developed to improve water solubility, but it still had poor kinetics. Monofluorinated (MOFO) and difluorinated (DIFO) cyclooctynes were created to increase the rate through the addition of electron-withdrawing fluorine substituents at the propargylic position. Fluorine is a good electron-withdrawing group in terms of synthetic accessibility and biological inertness. In particular, it cannot form an electrophilic Michael acceptor that may side-react with biological nucleophiles. DIBO (dibenzocyclooctyne) was developed as a fusion to two aryl rings, resulting in very high strain and a decrease in distortion energies. It was proposed that biaryl substitution increases ring strain and provides conjugation with the alkyne to improve reactivity. 
Although calculations have predicted that mono-aryl substitution would provide an optimal balance between steric clash (with azide molecule) and strain, monoarylated products have been shown to be unstable. BARAC (biarylazacyclooctynone) followed with the addition of an amide bond which adds an sp2-like center to increase rate by distortion. Amide resonance contributes additional strain without creating additional unsaturation which would lead to an unstable molecule. Additionally, the addition of a heteroatom into the cyclooctyne ring improves both solubility and pharmacokinetics of the molecule. BARAC has sufficient rate (and sensitivity) to the extent that washing away excess probe is unnecessary to reduce background. This makes it extremely useful in situations where washing is impossible as in real-time imaging or whole animal imaging. Although BARAC is extremely useful, its low stability requires that it must be stored at 0 °C, protected from light and oxygen. Further adjustments variations on BARAC to produce DIBAC/ADIBO were performed to add distal ring strain and reduce sterics around the alkyne to further increase reactivity. Keto-DIBO, in which the hydroxyl group has been converted to a ketone, has a three-fold increase in rate due to a change in ring conformation. Attempts to make a difluorobenzocyclooctyne (DIFBO) were unsuccessful due to the instability. Problems with DIFO with in vivo mouse studies illustrate the difficulty of producing bioorthogonal reactions. Although DIFO was extremely reactive in the labeling of cells, it performed poorly in mouse studies due to binding with serum albumin. Hydrophobicity of the cyclooctyne promotes sequestration by membranes and serum proteins, reducing bioavailable concentrations. In response, DIMAC (dimethoxyazacyclooctyne) was developed to increase water solubility, polarity, and pharmacokinetics, although efforts in bioorthogonal labeling of mouse models is still in development. Reactivity Computational efforts have been vital in explaining the thermodynamics and kinetics of these cycloaddition reactions which has played a vital role in continuing to improve the reaction. There are two methods for activating alkynes without sacrificing stability: decrease transition state energy or decrease reactant stability. Decreasing reactant stability: Houk has proposed that differences in the energy (Ed ‡) required to distort the azide and alkyne into the transition state geometries control the barrier heights for the reaction. The activation energy (E ‡) is the sum of destabilizing distortions and stabilizing interactions (Ei ‡). The most significant distortion is in the azide functional group with lesser contribution of alkyne distortion. However, it is only the cyclooctyne that can be easily modified for higher reactivity. Calculated barriers of reaction for phenyl azide and acetylene (16.2 kcal/mol) versus cyclooctyne (8.0 kcal/mol) results in a predicted rate increase of 106. The cyclooctyne requires less distortion energy (1.4 kcal/mol versus 4.6 kcal/mol) resulting in a lower activation energy despite smaller interaction energy. Decreasing transition state energy: Electron withdrawing groups such as fluorine increase rate by decreasing LUMO energy and the HOMO-LUMO gap. This leads to a greater charge transfer from the azide to the fluorinated cyclooctyne in the transition state, increasing interaction energy (lower negative value) and overall activation energy. 
The lowering of the LUMO is the result of hyperconjugation between alkyne π donor orbitals and CF σ* acceptors. These interactions provide stabilization primarily in the transition state as a result of increased donor/acceptor abilities of the bonds as they distort. NBO calculations have shown that transition state distortion increases the interaction energy by 2.8 kcal/mol. The hyperconjugation between out-of-plane π bonds is greater because the in-plane π bonds are poorly aligned. However, transition state bending allows the in-plane π bonds to have a more antiperiplanar arrangement that facilitates interaction. Additional hyperconjugative interaction energy stabilization is achieved through an increase in the electronic population of the σ* due to the forming CN bond. Negative hyperconjugation with the σ* CF bonds enhances this stabilizing interaction. Regioselectivity Although regioselectivity is not a great issue in the current imaging applications of copper-free click chemistry, it is an issue that prevents future applications in fields such as drug design or peptidomimetics. Currently most cyclooctynes react to form regioisomeric mixtures. [m] Computation analysis has found that while gas phase regioselectivity is calculated to favor 1,5 addition over 1,4 addition by up to 2.9 kcal/mol in activation energy, solvation corrections result in the same energy barriers for both regioisomers. While the 1,4 isomer in the cycloaddition of DIFO is disfavored by its larger dipole moment, solvation stabilizes it more strongly than the 1,5 isomer, eroding regioselectivity. Symmetrical cyclooctynes such as BCN (bicyclo[6.1.0]nonyne) form a single regioisomer upon cycloaddition and may serve to address this problem in the future. Applications The most widespread application of copper-free click chemistry is in biological imaging in live cells or animals using an azide-tagged biomolecule and a cyclooctyne bearing an imaging agent. Fluorescent keto and oxime variants of DIBO are used in fluoro-switch click reactions in which the fluorescence of the cyclooctyne is quenched by the triazole that forms in the reaction. On the other hand, coumarin-conjugated cyclooctynes such as coumBARAC have been developed such that the alkyne suppresses fluorescence while triazole formation increases the fluorescence quantum yield by ten-fold. Spatial and temporal control of substrate labeling has been investigated using photoactivatable cyclooctynes. This allows equilibration of the alkyne prior to reaction in order to reduce artifacts as a result of concentration gradients. Masked cyclooctynes are unable to react with azides in the dark but become reactive alkynes upon irradiation with light. Copper-free click chemistry is being explored for use in synthesizing PET imaging agents which must be made quickly with high purity and yield in order to minimize isotopic decay before the compounds can be administered. Both the high rate constants and the bioorthogonality of SPAAC are amenable to PET chemistry. Other bioorthogonal reactions Nitrone dipole cycloaddition Copper-free click chemistry has been adapted to use nitrones as the 1,3-dipole rather than azides and has been used in the modification of peptides. This cycloaddition between a nitrone and a cyclooctyne forms N-alkylated isoxazolines. The reaction rate is enhanced by water and is extremely fast with second order rate constants ranging from 12 to 32 M−1•s−1, depending on the substitution of the nitrone. 
Although the reaction is extremely fast, it faces problems in incorporating the nitrone into biomolecules through metabolic labeling. Labeling has only been achieved through post-translational peptide modification. Norbornene cycloaddition 1,3 dipolar cycloadditions have been developed as a bioorthogonal reaction using a nitrile oxide as a 1,3-dipole and a norbornene as a dipolarophile. Its primary use has been in labeling DNA and RNA in automated oligonucleotide synthesizers, and polymer crosslinking in the presence of living cells. Norbornenes were selected as dipolarophiles due to their balance between strain-promoted reactivity and stability. The drawbacks of this reaction include the cross-reactivity of the nitrile oxide due to strong electrophilicity and slow reaction kinetics. Oxanorbornadiene cycloaddition The oxanorbornadiene cycloaddition is a 1,3-dipolar cycloaddition followed by a retro-Diels Alder reaction to generate a triazole-linked conjugate with the elimination of a furan molecule. Preliminary work has established its usefulness in peptide labeling experiments, and it has also been used in the generation of SPECT imaging compounds. More recently, the use of an oxanorbornadiene was described in a catalyst-free room temperature "iClick" reaction, in which a model amino acid is linked to the metal moiety, in a novel approach to bioorthogonal reactions. Ring strain and electron deficiency in the oxanorbornadiene increase reactivity towards the cycloaddition rate-limiting step. The retro-Diels Alder reaction occurs quickly afterwards to form the stable 1,2,3 triazole. Problems include poor tolerance for substituents which may change electronics of the oxanorbornadiene and low rates (second order rate constants on the order of 10−4). Tetrazine ligation The tetrazine ligation is the reaction of a trans-cyclooctene and an s-tetrazine in an inverse-demand Diels Alder reaction followed by a retro-Diels Alder reaction to eliminate nitrogen gas. The reaction is extremely rapid with a second order rate constant of 2000 M−1–s−1 (in 9:1 methanol/water) allowing modifications of biomolecules at extremely low concentrations. Based on computational work by Bach, the strain energy for Z-cyclooctenes is 7.0 kcal/mol compared to 12.4 kcal/mol for cyclooctane due to a loss of two transannular interactions. E-cyclooctene has a highly twisted double bond resulting in a strain energy of 17.9 kcal/mol. As such, the highly strained trans-cyclooctene is used as a reactive dienophile. The diene is a 3,6-diaryl-s-tetrazine which has been substituted in order to resist immediate reaction with water. The reaction proceeds through an initial cycloaddition followed by a reverse Diels Alder to eliminate N2 and prevent reversibility of the reaction. Not only is the reaction tolerant of water, but it has been found that the rate increases in aqueous media. Reactions have also been performed using norbornenes as dienophiles at second order rates on the order of 1 M−1•s−1 in aqueous media. The reaction has been applied in labeling live cells and polymer coupling. [4+1] Cycloaddition This isocyanide click reaction is a [4+1] cycloaddition followed by a retro-Diels Alder elimination of N2. The reaction proceeds with an initial [4+1] cycloaddition followed by a reversion to eliminate a thermodynamic sink and prevent reversibility. This product is stable if a tertiary amine or isocyanopropanoate is used. If a secondary or primary isocyanide is used, the produce will form an imine which is quickly hydrolyzed. 
Isocyanide is a favored chemical reporter due to its small size, stability, non-toxicity, and absence in mammalian systems. However, the reaction is slow, with second order rate constants on the order of 10−2 M−1•s−1. Tetrazole photoclick chemistry Photoclick chemistry utilizes a photoinduced cycloelimination to release N2. This generates a short-lived 1,3 nitrile imine intermediate via the loss of nitrogen gas, which undergoes a 1,3-dipolar cycloaddition with an alkene to generate pyrazoline cycloadducts. Photoinduction takes place with a brief exposure to light (wavelength is tetrazole-dependent) to minimize photodamage to cells. The reaction is enhanced in aqueous conditions and generates a single regioisomer. The transient nitrile imine is highly reactive for 1,3-dipolar cycloaddition due to a bent structure which reduces distortion energy. Substitution with electron-donating groups on phenyl rings increases the HOMO energy, when placed on the 1,3 nitrile imine and increases the rate of reaction. Advantages of this approach include the ability to spatially or temporally control reaction and the ability to incorporate both alkenes and tetrazoles into biomolecules using simple biological methods such as genetic encoding. Additionally, the tetrazole can be designed to be fluorogenic in order to monitor progress of the reaction. Quadricyclane ligation The quadricyclane ligation utilizes a highly strained quadricyclane to undergo [2+2+2] cycloaddition with π systems. Quadricyclane is abiotic, unreactive with biomolecules (due to complete saturation), relatively small, and highly strained (~80 kcal/mol). However, it is highly stable at room temperature and in aqueous conditions at physiological pH. It is selectively able to react with electron-poor π systems but not simple alkenes, alkynes, or cyclooctynes. Bis(dithiobenzil)nickel(II) was chosen as a reaction partner out of a candidate screen based on reactivity. To prevent light-induced reversion to norbornadiene, diethyldithiocarbamate is added to chelate the nickel in the product. These reactions are enhanced by aqueous conditions with a second order rate constant of 0.25 M−1•s−1. Of particular interest is that it has been proven to be bioorthogonal to both oxime formation and copper-free click chemistry. Uses Bioorthogonal chemistry is an attractive tool for pretargeting experiments in nuclear imaging and radiotherapy. References Biochemical reactions Chemical biology 2003 neologisms
0.783665
0.982607
0.770035
Homothety
In mathematics, a homothety (or homothecy, or homogeneous dilation) is a transformation of an affine space determined by a point S called its center and a nonzero number called its ratio, which sends point to a point by the rule for a fixed number . Using position vectors: . In case of (Origin): , which is a uniform scaling and shows the meaning of special choices for : for one gets the identity mapping, for one gets the reflection at the center, For one gets the inverse mapping defined by . In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if ) or reverse (if ) the direction of all vectors. Together with the translations, all homotheties of an affine (or Euclidean) space form a group, the group of dilations or homothety-translations. These are precisely the affine transformations with the property that the image of every line g is a line parallel to g. In projective geometry, a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant. In Euclidean geometry, a homothety of ratio multiplies distances between points by , areas by and volumes by . Here is the ratio of magnification or dilation factor or scale factor or similitude ratio. Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called homothetic center or center of similarity or center of similitude. The term, coined by French mathematician Michel Chasles, is derived from two Greek elements: the prefix homo-, meaning "similar", and thesis, meaning "position". It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic. Homotheties are used to scale the contents of computer screens; for example, smartphones, notebooks, and laptops. Properties The following properties hold in any dimension. Mapping lines, line segments and angles A homothety has the following properties: A line is mapped onto a parallel line. Hence: angles remain unchanged. The ratio of two line segments is preserved. Both properties show: A homothety is a similarity. Derivation of the properties: In order to make calculations easy it is assumed that the center is the origin: . A line with parametric representation is mapped onto the point set with equation , which is a line parallel to . The distance of two points is and the distance between their images. Hence, the ratio (quotient) of two line segments remains unchanged . In case of the calculation is analogous but a little extensive. Consequences: A triangle is mapped on a similar one. The homothetic image of a circle is a circle. The image of an ellipse is a similar one. i.e. the ratio of the two axes is unchanged. Graphical constructions using the intercept theorem If for a homothety with center the image of a point is given (see diagram) then the image of a second point , which lies not on line can be constructed graphically using the intercept theorem: is the common point th two lines and . The image of a point collinear with can be determined using . using a pantograph Before computers became ubiquitous, scalings of drawings were done by using a pantograph, a tool similar to a compass. Construction and geometrical background: Take 4 rods and assemble a mobile parallelogram with vertices such that the two rods meeting at are prolonged at the other end as shown in the diagram. Choose the ratio . 
On the prolonged rods mark the two points such that and . This is the case if (Instead of the location of the center can be prescribed. In this case the ratio is .) Attach the mobile rods rotatable at point . Vary the location of point and mark at each time point . Because of (see diagram) one gets from the intercept theorem that the points are collinear (lie on a line) and equation holds. That shows: the mapping is a homothety with center and ratio . Composition The composition of two homotheties with the same center is again a homothety with center . The homotheties with center form a group. The composition of two homotheties with different centers and its ratios is in case of a homothety with its center on line and ratio or in case of a translation in direction . Especially, if (point reflections). Derivation: For the composition of the two homotheties with centers with one gets by calculation for the image of point : . Hence, the composition is in case of a translation in direction by vector . in case of point is a fixpoint (is not moved) and the composition . is a homothety with center and ratio . lies on line . The composition of a homothety and a translation is a homothety. Derivation: The composition of the homothety and the translation is which is a homothety with center and ratio . In homogenous coordinates The homothety with center can be written as the composition of a homothety with center and a translation: . Hence can be represented in homogeneous coordinates by the matrix: A pure homothety linear transformation is also conformal because it is composed of translation and uniform scale. See also Scaling (geometry) a similar notion in vector spaces Homothetic center, the center of a homothetic transformation taking one of a pair of shapes into the other The Hadwiger conjecture on the number of strictly smaller homothetic copies of a convex body that may be needed to cover it Homothetic function (economics), a function of the form f(U(y)) in which U is a homogeneous function and f is a monotonically increasing function. Notes References H.S.M. Coxeter, "Introduction to geometry" , Wiley (1961), p. 94 External links Homothety, interactive applet from Cut-the-Knot. Transformation (function)
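Because the inline formulas for the mapping rule were lost above, a minimal NumPy sketch may help; it assumes the standard rule \(X \mapsto S + k\,(X - S)\) for center \(S\) and ratio \(k\), and checks numerically that distances scale by \(|k|\) while the ratio of two segment lengths is preserved:

```python
import numpy as np

def homothety(points, center, k):
    """Map each point X to S + k*(X - S) for center S and nonzero ratio k."""
    points = np.asarray(points, dtype=float)
    center = np.asarray(center, dtype=float)
    return center + k * (points - center)

S, k = np.array([1.0, 2.0]), -0.5          # center and ratio (k < 0 reverses directions)
P = np.array([[3.0, 4.0], [5.0, 1.0], [0.0, 0.0]])
Q = homothety(P, S, k)

d = lambda X, i, j: np.linalg.norm(X[i] - X[j])
print(d(Q, 0, 1) / d(P, 0, 1), abs(k))     # distances are scaled by |k|
print(np.isclose(d(P, 0, 1) / d(P, 1, 2),
                 d(Q, 0, 1) / d(Q, 1, 2))) # ratio of segment lengths preserved -> True
```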
Reflexivity (social theory)
In epistemology, and more specifically, the sociology of knowledge, reflexivity refers to circular relationships between cause and effect, especially as embedded in human belief structures. A reflexive relationship is multi-directional when the causes and the effects affect the reflexive agent in a layered or complex sociological relationship. The complexity of this relationship can be furthered when epistemology includes religion. Within sociology more broadly—the field of origin—reflexivity means an act of self-reference where existence engenders examination, by which the thinking action "bends back on", refers to, and affects the entity instigating the action or examination. It commonly refers to the capacity of an agent to recognise forces of socialisation and alter their place in the social structure. A low level of reflexivity would result in individuals shaped largely by their environment (or "society"). A high level of social reflexivity would be defined by individuals shaping their own norms, tastes, politics, desires, and so on. This is similar to the notion of autonomy. (See also structure and agency and social mobility.) Within economics, reflexivity refers to the self-reinforcing effect of market sentiment, whereby rising prices attract buyers whose actions drive prices higher still until the process becomes unsustainable. This is an instance of a positive feedback loop. The same process can operate in reverse leading to a catastrophic collapse in prices. Overview In social theory, reflexivity may occur when theories in a discipline should apply equally to the discipline itself; for example, in the case that the theories of knowledge construction in the field of sociology of scientific knowledge should apply equally to knowledge construction by sociology of scientific knowledge practitioners, or when the subject matter of a discipline should apply equally to the individual practitioners of that discipline (e.g., when psychological theory should explain the psychological processes of psychologists). More broadly, reflexivity is considered to occur when the observations of observers in the social system affect the very situations they are observing, or when theory being formulated is disseminated to and affects the behaviour of the individuals or systems the theory is meant to be objectively modelling. Thus, for example, an anthropologist living in an isolated village may affect the village and the behaviour of its citizens under study. The observations are not independent of the participation of the observer. Reflexivity is, therefore, a methodological issue in the social sciences analogous to the observer effect. Within that part of recent sociology of science that has been called the strong programme, reflexivity is suggested as a methodological norm or principle, meaning that a full theoretical account of the social construction of, say, scientific, religious or ethical knowledge systems, should itself be explainable by the same principles and methods as used for accounting for these other knowledge systems. 
This points to a general feature of naturalised epistemologies, that such theories of knowledge allow for specific fields of research to elucidate other fields as part of an overall self-reflective process: any particular field of research occupied with aspects of knowledge processes in general (e.g., history of science, cognitive science, sociology of science, psychology of perception, semiotics, logic, neuroscience) may reflexively study other such fields yielding to an overall improved reflection on the conditions for creating knowledge. Reflexivity includes both a subjective process of self-consciousness inquiry and the study of social behaviour with reference to theories about social relationships. History The principle of reflexivity was perhaps first enunciated by the sociologists William I. Thomas and Dorothy Swaine Thomas, in their 1928 book The child in America: "If men define situations as real, they are real in their consequences". The theory was later termed the "Thomas theorem". Sociologist Robert K. Merton (1948, 1949) built on the Thomas principle to define the notion of a self-fulfilling prophecy: that once a prediction or prophecy is made, actors may accommodate their behaviours and actions so that a statement that would have been false becomes true or, conversely, a statement that would have been true becomes false - as a consequence of the prediction or prophecy being made. The prophecy has a constitutive impact on the outcome or result, changing the outcome from what would otherwise have happened. Reflexivity was taken up as an issue in science in general by Karl Popper (1957), who in his book The poverty of historicism highlighted the influence of a prediction upon the event predicted, calling this the 'Oedipus effect' in reference to the Greek tale in which the sequence of events fulfilling the Oracle's prophecy is greatly influenced by the prophecy itself. Popper initially considered such self-fulfilling prophecy a distinguishing feature of social science, but later came to see that in the natural sciences, particularly biology and even molecular biology, something equivalent to expectation comes into play and can act to bring about that which has been expected. It was also taken up by Ernest Nagel (1961). Reflexivity presents a problem for science because if a prediction can lead to changes in the system that the prediction is made in relation to, it becomes difficult to assess scientific hypotheses by comparing the predictions they entail with the events that actually occur. The problem is even more difficult in the social sciences. Reflexivity has been taken up as the issue of "reflexive prediction" in economic science by Grunberg and Modigliani (1954) and Herbert A. Simon (1954), has been debated as a major issue in relation to the Lucas critique, and has been raised as a methodological issue in economic science arising from the issue of reflexivity in the sociology of scientific knowledge (SSK) literature. Reflexivity has emerged as both an issue and a solution in modern approaches to the problem of structure and agency, for example in the work of Anthony Giddens in his structuration theory and Pierre Bourdieu in his genetic structuralism. Giddens, for example, noted that constitutive reflexivity is possible in any social system, and that this presents a distinct methodological problem for the social sciences. 
Giddens accentuated this theme with his notion of "reflexive modernity" – the argument that, over time, society is becoming increasingly more self-aware, reflective, and hence reflexive. Bourdieu argued that the social scientist is inherently laden with biases, and only by becoming reflexively aware of those biases can the social scientists free themselves from them and aspire to the practice of an objective science. For Bourdieu, therefore, reflexivity is part of the solution, not the problem. Michel Foucault's The order of things can be said to touch on the issue of Reflexivity. Foucault examines the history of Western thought since the Renaissance and argues that each historical epoch (he identifies three and proposes a fourth) has an episteme, or "a historical a priori", that structures and organises knowledge. Foucault argues that the concept of man emerged in the early 19th century, what he calls the "Age of Man", with the philosophy of Immanuel Kant. He finishes the book by posing the problem of the age of man and our pursuit of knowledge- where "man is both knowing subject and the object of his own study"; thus, Foucault argues that the social sciences, far from being objective, produce truth in their own mutually exclusive discourses. In economics Economic philosopher George Soros, influenced by ideas put forward by his tutor, Karl Popper (1957), has been an active promoter of the relevance of reflexivity to economics, first propounding it publicly in his 1987 book The alchemy of finance. He regards his insights into market behaviour from applying the principle as a major factor in the success of his financial career. Reflexivity is inconsistent with general equilibrium theory, which stipulates that markets move towards equilibrium and that non-equilibrium fluctuations are merely random noise that will soon be corrected. In equilibrium theory, prices in the long run at equilibrium reflect the underlying economic fundamentals, which are unaffected by prices. Reflexivity asserts that prices do in fact influence the fundamentals and that these newly influenced sets of fundamentals then proceed to change expectations, thus influencing prices; the process continues in a self-reinforcing pattern. Because the pattern is self-reinforcing, markets tend towards disequilibrium. Sooner or later they reach a point where the sentiment is reversed and negative expectations become self-reinforcing in the downward direction, thereby explaining the familiar pattern of boom and bust cycles. An example Soros cites is the procyclical nature of lending, that is, the willingness of banks to ease lending standards for real estate loans when prices are rising, then raising standards when real estate prices are falling, reinforcing the boom and bust cycle. He further suggests that property price inflation is essentially a reflexive phenomenon: house prices are influenced by the sums that banks are prepared to advance for their purchase, and these sums are determined by the banks' estimation of the prices that the property would command. Soros has often claimed that his grasp of the principle of reflexivity is what has given him his "edge" and that it is the major factor contributing to his successes as a trader. For several decades there was little sign of the principle being accepted in mainstream economic circles, but there has been an increase of interest following the crash of 2008, with academic journals, economists, and investors discussing his theories. 
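A deliberately simplified numerical sketch may make the feedback structure concrete. The toy model below is illustrative only: the coefficients, starting values, and the rule that "fundamentals" drift toward prices are assumptions made for this sketch, not a model taken from Soros or from any author cited here. It contrasts an equilibrium-style correction with a reflexive loop in which fundamentals and expectations respond to prices.

```python
# Toy contrast between an equilibrium-style model (prices revert to fixed
# fundamentals) and a "reflexive" one (fundamentals respond to prices).
# Illustrative sketch only; all coefficients are arbitrary assumptions.

def run(reflexive: bool, steps: int = 60) -> float:
    price, fundamental = 105.0, 100.0          # start with a small overpricing
    for _ in range(steps):
        if reflexive:
            # Rising prices feed back into the fundamentals (for instance,
            # easier lending against higher collateral values) ...
            fundamental += 0.10 * (price - fundamental)
            # ... and expectations extrapolate the move, pushing prices on.
            price += 0.13 * (price - fundamental)
        else:
            # Equilibrium view: deviations from fixed fundamentals are noise
            # that is corrected over time (negative feedback).
            price += -0.15 * (price - fundamental)
    return price - fundamental

print(f"price-fundamental gap after 60 steps, equilibrium model: {run(False):.2f}")
print(f"price-fundamental gap after 60 steps, reflexive model:   {run(True):.2f}")
```

In the equilibrium branch the gap shrinks toward zero, while in the reflexive branch the same small overpricing keeps widening, which is the self-reinforcing, disequilibrium tendency described above.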
Economist and former columnist of the Financial Times, Anatole Kaletsky, argued that Soros' concept of reflexivity is useful in understanding China's economy and how the Chinese government manages it. In 2009, Soros funded the launch of the Institute for New Economic Thinking with the hope that it would develop reflexivity further. The Institute works with several types of heterodox economics, particularly the post-Keynesian branch. In sociology Margaret Archer has written extensively on laypeople's reflexivity. For her, human reflexivity is a mediating mechanism between structural properties, or the individual's social context, and action, or the individual's ultimate concerns. Reflexive activity, according to Archer, increasingly takes the place of habitual action in late modernity since routine forms prove ineffective in dealing with the complexity of modern life trajectories. While Archer emphasises the agentic aspect of reflexivity, reflexive orientations can themselves be seen as being "socially and temporally embedded". For example, Elster points out that reflexivity cannot be understood without taking into account the fact that it draws on background configurations (e.g., shared meanings, as well as past social engagement and lived experiences of the social world) to be operative. In anthropology In anthropology, reflexivity has come to have two distinct meanings, one that refers to the researcher's awareness of an analytic focus on his or her relationship to the field of study, and the other that attends to the ways that cultural practices involve consciousness and commentary on themselves. The first sense of reflexivity in anthropology is part of social science's more general self-critique in the wake of theories by Michel Foucault and others about the relationship of power and knowledge production. Reflexivity about the research process became an important part of the critique of the colonial roots and scientistic methods of anthropology in the "writing cultures" movement associated with James Clifford and George Marcus, as well as many other anthropologists. Rooted in literary criticism and philosophical analysis of the relationship among the anthropologists, the people represented in texts, and their textual representations, this approach has fundamentally changed ethical and methodological approaches in anthropology. As with the feminist and anti-colonial critiques that provide some of reflexive anthropology's inspiration, the reflexive understanding of the academic and political power of representations and the analysis of the process of "writing culture" have become a necessary part of understanding the situation of the ethnographer in the field. Objectification of people and cultures and analysis of them only as objects of study has been largely rejected in favor of developing more collaborative approaches that respect local people's values and goals. Nonetheless, many anthropologists have accused the "writing cultures" approach of muddying the scientific aspects of anthropology with too much introspection about fieldwork relationships, and reflexive anthropology has been heavily attacked by more positivist anthropologists. Considerable debate continues in anthropology over the role of postmodernism and reflexivity, but most anthropologists accept the value of the critical perspective, and generally only argue about the relevance of critical models that seem to lead anthropology away from its earlier core foci.
The second kind of reflexivity studied by anthropologists involves varieties of self-reference in which people and cultural practices call attention to themselves. One important origin for this approach is Roman Jakobson in his studies of deixis and the poetic function in language, but the work of Mikhail Bakhtin on carnival has also been important. Within anthropology, Gregory Bateson developed ideas about meta-messages (subtext) as part of communication, while Clifford Geertz's studies of ritual events such as the Balinese cock-fight point to their role as foci for public reflection on the social order. Studies of play and tricksters further expanded ideas about reflexive cultural practices. Reflexivity has been most intensively explored in studies of performance, public events, rituals, and linguistic forms but can be seen any time acts, things, or people are held up and commented upon or otherwise set apart for consideration. In researching cultural practices, reflexivity plays an important role, but because of its complexity and subtlety, it often goes under-investigated or involves highly specialised analyses. One use of studying reflexivity is in connection to authenticity. Cultural traditions are often imagined to be perpetuated as stable ideals by uncreative actors. Innovation may or may not change tradition, but since reflexivity is intrinsic to many cultural activities, reflexivity is part of tradition and not inauthentic. The study of reflexivity shows that people have both self-awareness and creativity in culture. They can play with, comment upon, debate, modify, and objectify culture through manipulating many different features in recognised ways. This leads to the metaculture of conventions about managing and reflecting upon culture. In international relations In international relations, the question of reflexivity was first raised in the context of the so-called 'Third Debate' of the late 1980s. This debate marked a break with the positivist orthodoxy of the discipline. The post-positivist theoretical restructuring was seen to introduce reflexivity as a cornerstone of critical scholarship. For Mark Neufeld, reflexivity in International Relations was characterized by 1) self-awareness of underlying premises, 2) an acknowledgment of the political-normative dimension of theoretical paradigms, and 3) the affirmation that judgement about the merits of paradigms is possible despite the impossibility of neutral or apolitical knowledge production. Since the nineties, reflexivity has become an explicit concern of constructivist, poststructuralist, feminist, and other critical approaches to International Relations. In The Conduct of Inquiry in International Relations, Patrick Thaddeus Jackson identified reflexivity as one of the four main methodologies into which contemporary International Relations research can be divided, alongside neopositivism, critical realism, and analyticism. Reflexivity and the status of the social sciences Flanagan has argued that reflexivity complicates all three of the traditional roles that are typically played by a classical science: explanation, prediction and control. The fact that individuals and social collectivities are capable of self-inquiry and adaptation is a key characteristic of real-world social systems, differentiating the social sciences from the physical sciences.
Reflexivity, therefore, raises real issues regarding the extent to which the social sciences may ever be viewed as "hard" sciences analogous to classical physics, and raises questions about the nature of the social sciences. Methods for the implementation of reflexivity A new generation of scholars has gone beyond (meta-)theoretical discussion to develop concrete research practices for the implementation of reflexivity. These scholars have addressed the ‘how to’ question by turning reflexivity from an informal process into a formal research practice. While most research focuses on how scholars can become more reflexive toward their positionality and situatedness, some have sought to build reflexive methods in relation to other processes of knowledge production, such as the use of language. The latter has been advanced by the work of Professor Audrey Alejandro in a trilogy on reflexive methods. The first article of the trilogy develops what is referred to as Reflexive Discourse Analysis, a critical methodology for the implementation of reflexivity that integrates discourse theory. The second article further expands the methodological tools for practicing reflexivity by introducing a three-stage research method for problematizing linguistic categories. The final piece of the trilogy adds a further method for linguistic reflexivity, namely the Reflexive Review. This method provides four steps that aim to add a linguistic and reflexive dimension to the practice of writing a literature review. See also References Further reading Bryant, C. G. A. (2002). "George Soros's theory of reflexivity: a comparison with the theories of Giddens and Beck and a consideration of its practical value", Economy and society, 31 (1), pp. 112–131. Flanagan, O. J. (1981). "Psychology, progress, and the problem of reflexivity: a study in the epistemological foundations of psychology", Journal of the history of the behavioral sciences, 17, pp. 375–386. Gay, D. (2009). Reflexivity and development economics. London: Palgrave Macmillan Grunberg, E. and F. Modigliani (1954). "The predictability of social events", Journal of political economy, 62 (6), pp. 465–478. Merton, R. K. (1948). "The self-fulfilling prophecy", Antioch Review, 8, pp. 193–210. Merton, R. K. (1949/1957), Social theory and social structure. Rev. ed. The Free Press, Glencoe, IL. Nagel, E. (1961), The structure of science: problems in the logic of scientific explanation, Harcourt, New York. Popper, K. (1957), The poverty of historicism, Harper and Row, New York. Simon, H. (1954). "Bandwagon and underdog effects of election predictions", Public opinion quarterly, 18, pp. 245–253. Soros, G (1987) The alchemy of finance (Simon & Schuster, 1988) (paperback: Wiley, 2003; ) Soros, G (2008) The new paradigm for financial markets: the credit crisis of 2008 and what it means (PublicAffairs, 2008) Soros, G (2006) The age of fallibility: consequences of the war on terror (PublicAffairs, 2006) Soros, G The bubble of American supremacy: correcting the misuse of American power (PublicAffairs, 2003) (paperback; PublicAffairs, 2004; ) Soros, G George Soros on globalization (PublicAffairs, 2002) (paperback; PublicAffairs, 2005; ) Soros, G (2000) Open society: reforming global capitalism (PublicAffairs, 2001) Thomas, W. I. (1923), The unadjusted girl : with cases and standpoint for behavior analysis, Little, Brown, Boston, MA. Thomas, W. I. and D. S. Thomas (1928), The child in America : behavior problems and programs, Knopf, New York. Tsekeris, C. (2013). 
"Toward a chaos-friendly reflexivity", Entelequia, 16, pp. 71–89. Woolgar, S. (1988). Knowledge and reflexivity: new frontiers in the sociology of knowledge. London and Beverly Hills: Sage. Sociological terminology Sociological theories George Soros Self-reference
Abstraction
Abstraction is a process where general rules and concepts are derived from the use and classifying of specific examples, literal (real or concrete) signifiers, first principles, or other methods. "An abstraction" is the outcome of this process — a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category. Conceptual abstractions may be made by filtering the information content of a concept or an observable phenomenon, selecting only those aspects which are relevant for a particular purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, excluding but not eliminating the other phenomenal and cognitive characteristics of that particular ball. In a type–token distinction, a type (e.g., a 'ball') is more abstract than its tokens (e.g., 'that leather soccer ball'). Abstraction in its secondary use is a material process, discussed in the themes below. Origins Thinking in abstractions is considered by anthropologists, archaeologists, and sociologists to be one of the key traits in modern human behaviour, which is believed to have developed between 50,000 and 100,000 years ago. Its development is likely to have been closely connected with the development of human language, which (whether spoken or written) appears to both involve and facilitate abstract thinking. History Abstraction involves induction of ideas or the synthesis of particular facts into one general theory about something. It is the opposite of specification, which is the analysis or breaking-down of a general idea or abstraction into concrete facts. Abstraction can be illustrated by Francis Bacon's Novum Organum (1620), a book of modern scientific philosophy written in the late Jacobean era of England to encourage modern thinkers to collect specific facts before making any generalizations. Bacon used and promoted induction as an abstraction tool; it complemented but was distinct from the ancient deductive-thinking approach that had dominated the intellectual world since the times of Greek philosophers like Thales, Anaximander, and Aristotle. Thales (–546 BCE) believed that everything in the universe comes from one main substance, water. He deduced or specified from a general idea, "everything is water," to the specific forms of water such as ice, snow, fog, and rivers. Modern scientists used the approach of abstraction (going from particular facts collected into one general idea). Newton (1642–1727) derived the motion of the planets from Copernicus' (1473–1543) simplification, that the Sun is the center of the Solar System; Kepler (1571–1630) compressed thousands of measurements into one expression to finally conclude that Mars moves in an elliptical orbit about the Sun; Galileo (1564–1642) repeated one hundred specific experiments into the law of falling bodies. Themes Compression An abstraction can be seen as a compression process, mapping multiple different pieces of constituent data to a single piece of abstract data; based on similarities in the constituent data, for example, many different physical cats map to the abstraction "CAT". This conceptual scheme emphasizes the inherent equality of both constituent and abstract data, thus avoiding problems arising from the distinction between "abstract" and "concrete". 
In this sense the process of abstraction entails the identification of similarities between objects, and the process of associating these objects with an abstraction (which is itself an object). For example, picture 1 below illustrates the concrete relationship "Cat sits on Mat". Chains of abstractions can be construed, moving from neural impulses arising from sensory perception to basic abstractions such as color or shape, to experiential abstractions such as a specific cat, to semantic abstractions such as the "idea" of a CAT, to classes of objects such as "mammals" and even categories such as "object" as opposed to "action". For example, graph 1 below expresses the abstraction "agent sits on location". This conceptual scheme entails no specific hierarchical taxonomy (such as the one mentioned involving cats and mammals), only a progressive exclusion of detail. Instantiation Non-existent things in any particular place and time are often seen as abstract. By contrast, instances, or members, of such an abstract thing might exist in many different places and times. Those abstract things are then said to be multiply instantiated, in the sense of picture 1, picture 2, etc., shown below. It is not sufficient, however, to define abstract ideas as those that can be instantiated and to define abstraction as the movement in the opposite direction to instantiation. Doing so would make the concepts "cat" and "telephone" abstract ideas since despite their varying appearances, a particular cat or a particular telephone is an instance of the concept "cat" or the concept "telephone". Although the concepts "cat" and "telephone" are abstractions, they are not abstract in the sense of the objects in graph 1 below. We might look at other graphs, in a progression from cat to mammal to animal, and see that animal is more abstract than mammal; but on the other hand mammal is a harder idea to express, certainly in relation to marsupial or monotreme. Perhaps confusingly, some philosophies refer to tropes (instances of properties) as abstract particulars—e.g., the particular redness of a particular apple is an abstract particular. This is similar to qualia and sumbebekos. Material process Still retaining the primary meaning of the Latin 'abstrahere' or 'to draw away from', the abstraction of money, for example, works by drawing away from the particular value of things allowing completely incommensurate objects to be compared (see the section on 'Physicality' below). Karl Marx's writing on the commodity abstraction recognizes a parallel process. The state (polity) as both concept and material practice exemplifies the two sides of this process of abstraction. Conceptually, 'the current concept of the state is an abstraction from the much more concrete early-modern use as the standing or status of the prince, his visible estates'. At the same time, materially, the 'practice of statehood is now constitutively and materially more abstract than at the time when princes ruled as the embodiment of extended power'. Ontological status The way that physical objects, like rocks and trees, have being differs from the way that properties of abstract concepts or relations have being, for example the way the concrete, particular, individuals pictured in picture 1 exist differs from the way the concepts illustrated in graph 1 exist. That difference accounts for the ontological usefulness of the word "abstract".
The word applies to properties and relations to mark the fact that, if they exist, they do not exist in space or time, but that instances of them can exist, potentially in many different places and times. Physicality A physical object (a possible referent of a concept or word) is considered concrete (not abstract) if it is a particular individual that occupies a particular place and time. However, in the secondary sense of the term 'abstraction', this physical object can carry materially abstracting processes. For example, record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in containers. According to Schmandt-Besserat, these clay containers contained tokens, the total of which were the count of objects being transferred. The containers thus served as something of a bill of lading or an accounts book. In order to avoid breaking open the containers for the count, marks were placed on the outside of the containers. These physical marks, in other words, acted as material abstractions of a materially abstract process of accounting, using conceptual abstractions (numbers) to communicate its meaning. Abstract things are sometimes defined as those things that do not exist in reality or exist only as sensory experiences, like the color red. That definition, however, suffers from the difficulty of deciding which things are real (i.e. which things exist in reality). For example, it is difficult to agree on whether concepts like God, the number three, and goodness are real, abstract, or both. An approach to resolving such difficulty is to use predicates as a general term for whether things are variously real, abstract, concrete, or of a particular property (e.g., good). Questions about the properties of things are then propositions about predicates, which propositions remain to be evaluated by the investigator. In graph 1 below, the graphical relationships like the arrows joining boxes and ellipses might denote predicates. Referencing and referring Abstractions sometimes have ambiguous referents. For example, "happiness" can mean experiencing various positive emotions, but can also refer to life satisfaction and subjective well-being. Likewise, "architecture" refers not only to the design of safe, functional buildings, but also to elements of creation and innovation which aim at elegant solutions to construction problems, to the use of space, and to the attempt to evoke an emotional response in the builders, owners, viewers and users of the building. Simplification and ordering Abstraction uses a strategy of simplification, wherein formerly concrete details are left ambiguous, vague, or undefined; thus effective communication about things in the abstract requires an intuitive or common experience between the communicator and the communication recipient. This is true for all verbal/abstract communication. For example, many different things can be red. Likewise, many things sit on surfaces (as in picture 1, to the right). The property of redness and the relation sitting-on are therefore abstractions of those objects. Specifically, the conceptual diagram graph 1 identifies only three boxes, two ellipses, and four arrows (and their five labels), whereas the picture 1 shows much more pictorial detail, with the scores of implied relationships as implicit in the picture rather than with the nine explicit details in the graph. Graph 1 details some explicit relationships between the objects of the diagram.
For example, the arrow between the agent and CAT:Elsie depicts an example of an is-a relationship, as does the arrow between the location and the MAT. The arrows between the gerund/present participle SITTING and the nouns agent and location express the diagram's basic relationship; "agent is SITTING on location"; Elsie is an instance of CAT. Although the description sitting-on (graph 1) is more abstract than the graphic image of a cat sitting on a mat (picture 1), the delineation of abstract things from concrete things is somewhat ambiguous; this ambiguity or vagueness is characteristic of abstraction. Thus something as simple as a newspaper might be specified to six levels, as in Douglas Hofstadter's illustration of that ambiguity, with a progression from abstract to concrete in Gödel, Escher, Bach (1979): An abstraction can thus encapsulate each of these levels of detail with no loss of generality. But perhaps a detective or philosopher/scientist/engineer might seek to learn about something, at progressively deeper levels of detail, to solve a crime or a puzzle. Thought processes In philosophical terminology, abstraction is the thought process wherein ideas are distanced from objects. But an idea can be symbolized. As used in different disciplines In art Typically, abstraction is used in the arts as a synonym for abstract art in general. Strictly speaking, it refers to art unconcerned with the literal depiction of things from the visible world—it can, however, refer to an object or image which has been distilled from the real world, or indeed, another work of art. Artwork that reshapes the natural world for expressive purposes is called abstract; that which derives from, but does not imitate a recognizable subject is called nonobjective abstraction. In the 20th century the trend toward abstraction coincided with advances in science, technology, and changes in urban life, eventually reflecting an interest in psychoanalytic theory. Later still, abstraction was manifest in more purely formal terms, such as color, freedom from objective context, and a reduction of form to basic geometric designs. In computer science Computer scientists use abstraction to make models that can be used and re-used without having to re-write all the program code for each new application on every different type of computer. They communicate their solutions with the computer by writing source code in some particular computer language which can be translated into machine code for different types of computers to execute. Abstraction allows program designers to separate a framework (categorical concepts related to computing problems) from specific instances which implement details. This means that the program code can be written so that code does not have to depend on the specific details of supporting applications, operating system software, or hardware, but on a categorical concept of the solution. A solution to the problem can then be integrated into the system framework with minimal additional work. This allows programmers to take advantage of another programmer's work, while requiring only an abstract understanding of the implementation of another's work, apart from the problem that it solves. In general semantics Abstractions and levels of abstraction play an important role in the theory of general semantics originated by Alfred Korzybski. Anatol Rapoport wrote "Abstracting is a mechanism by which an infinite variety of experiences can be mapped on short noises (words)." 
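The computer-science sense of abstraction described above, separating a framework from the specific instances that implement its details, can be made concrete with a short sketch. The following Python fragment is only an illustration: the Storage interface, its two implementations, and save_report are invented names for this example, not an API from any system mentioned in the article.

```python
# A minimal sketch of code written against an abstraction (Storage)
# rather than against any concrete detail of how bytes are stored.
from abc import ABC, abstractmethod

class Storage(ABC):
    """The framework: a categorical concept of 'somewhere to put bytes'."""
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def read(self, key: str) -> bytes: ...

class InMemoryStorage(Storage):
    """One concrete instance implementing the details (a dict in RAM)."""
    def __init__(self) -> None:
        self._items: dict[str, bytes] = {}
    def write(self, key: str, data: bytes) -> None:
        self._items[key] = data
    def read(self, key: str) -> bytes:
        return self._items[key]

class FileStorage(Storage):
    """Another concrete instance: files in a directory."""
    def __init__(self, directory: str) -> None:
        import os
        self._dir = directory
        os.makedirs(directory, exist_ok=True)
    def write(self, key: str, data: bytes) -> None:
        with open(f"{self._dir}/{key}", "wb") as f:
            f.write(data)
    def read(self, key: str) -> bytes:
        with open(f"{self._dir}/{key}", "rb") as f:
            return f.read()

def save_report(store: Storage, text: str) -> None:
    # Depends only on the abstraction; reusable with any Storage implementation.
    store.write("report.txt", text.encode("utf-8"))

save_report(InMemoryStorage(), "hello")       # the same caller code works
save_report(FileStorage("./reports"), "hello")  # with either concrete detail
```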
In history Francis Fukuyama defines history as "a deliberate attempt of abstraction in which we separate out important from unimportant events". In linguistics Researchers in linguistics frequently apply abstraction so as to allow an analysis of the phenomena of language at the desired level of detail. A commonly used abstraction, the phoneme, abstracts speech sounds in such a way as to neglect details that cannot serve to differentiate meaning. Other analogous kinds of abstractions (sometimes called "emic units") considered by linguists include morphemes, graphemes, and lexemes. Abstraction also arises in the relation between syntax, semantics, and pragmatics. Pragmatics involves considerations that make reference to the user of the language; semantics considers expressions and what they denote (the designata) abstracted from the language user; and syntax considers only the expressions themselves, abstracted from the designata. In mathematics Abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept or object, removing any dependence on real-world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena. The advantages of abstraction in mathematics are: It reveals deep connections between different areas of mathematics. Known results in one area can suggest conjectures in another related area. Techniques and methods from one area can be applied to prove results in other related area. Patterns from one mathematical object can be generalized to other similar objects in the same class. The main disadvantage of abstraction is that highly abstract concepts are more difficult to learn, and might require a degree of mathematical maturity and experience before they can be assimilated. In music In music, the term abstraction can be used to describe improvisatory approaches to interpretation, and may sometimes indicate abandonment of tonality. Atonal music has no key signature, and is characterized by the exploration of internal numeric relationships. In neurology A recent meta-analysis suggests that the verbal system has a greater engagement with abstract concepts when the perceptual system is more engaged in processing concrete concepts. This is because abstract concepts elicit greater brain activity in the inferior frontal gyrus and middle temporal gyrus compared to concrete concepts which elicit greater activity in the posterior cingulate, precuneus, fusiform gyrus, and parahippocampal gyrus. Other research into the human brain suggests that the left and right hemispheres differ in their handling of abstraction. For example, one meta-analysis reviewing human brain lesions has shown a left hemisphere bias during tool usage. In philosophy Abstraction in philosophy is the process (or, to some, the alleged process) in concept formation of recognizing some set of common features in individuals, and on that basis forming a concept of that feature. The notion of abstraction is important to understanding some philosophical controversies surrounding empiricism and the problem of universals. It has also recently become popular in formal logic under predicate abstraction. Another philosophical tool for the discussion of abstraction is thought space. 
John Locke defined abstraction in An Essay Concerning Human Understanding: 'So words are used to stand as outward marks of our internal ideas, which are taken from particular things; but if every particular idea that we take in had its own special name, there would be no end to names. To prevent this, the mind makes particular ideas received from particular things become general; which it does by considering them as they are in the mind—mental appearances—separate from all other existences, and from the circumstances of real existence, such as time, place, and so on. This procedure is called abstraction. In it, an idea taken from a particular thing becomes a general representative of all of the same kind, and its name becomes a general name that is applicable to any existing thing that fits that abstract idea.' (2.11.9) In psychology Carl Jung's definition of abstraction broadened its scope beyond the thinking process to include exactly four mutually exclusive, different complementary psychological functions: sensation, intuition, feeling, and thinking. Together they form a structural totality of the differentiating abstraction process. Abstraction operates in one of these functions when it excludes the simultaneous influence of the other functions and other irrelevancies, such as emotion. Abstraction requires selective use of this structural split of abilities in the psyche. The opposite of abstraction is concretism. Abstraction is one of Jung's 57 definitions in Chapter XI of Psychological Types. In social theory Social theorists deal with abstraction both as an ideational and as a material process. Alfred Sohn-Rethel (1899–1990) asked: "Can there be abstraction other than by thought?" He used the example of commodity abstraction to show that abstraction occurs in practice as people create systems of abstract exchange that extend beyond the immediate physicality of the object and yet have real and immediate consequences. This work was extended through the 'Constitutive Abstraction' approach of writers associated with the journal Arena. Two books that have taken this theme of the abstraction of social relations as an organizing process in human history are Nation Formation: Towards a Theory of Abstract Community (1996) and an associated volume published in 2006, Globalism, Nationalism, Tribalism: Bringing Theory Back In. These books argue that a nation is an abstract community bringing together strangers who will never meet as such; thus constituting materially real and substantial, but abstracted and mediated relations. The books suggest that contemporary processes of globalization and mediatization have contributed to materially abstracting relations between people, with major consequences for how humans live their lives. One can readily argue that abstraction is an elementary methodological tool in several disciplines of social science. These disciplines have definite and different concepts of "man" that highlight those aspects of man and his behaviour by idealization that are relevant for the given human science. For example, homo sociologicus is the man as sociology abstracts and idealizes it, depicting man as a social being. Moreover, we could talk about homo cyber sapiens (the man who can extend his biologically determined intelligence thanks to new technologies), or homo creativus (who is simply creative). Abstraction (combined with Weberian idealization) plays a crucial role in economics; hence abstractions such as "the market" and the generalized concept of "business".
Breaking away from directly experienced reality was a common trend in 19th-century sciences (especially physics), and this was the effort which fundamentally determined the way economics tried (and still tries) to approach the economic aspects of social life. It is abstraction we meet in the case of both Newton's physics and the neoclassical theory, since the goal was to grasp the unchangeable and timeless essence of phenomena. For example, Newton created the concept of the material point by following the abstraction method so that he abstracted from the dimension and shape of any perceptible object, preserving only inertial and translational motion. Material point is the ultimate and common feature of all bodies. Neoclassical economists created the indefinitely abstract notion of homo economicus by following the same procedure. Economists abstract from all individual and personal qualities in order to get to those characteristics that embody the essence of economic activity. Eventually, it is the substance of the economic man that they try to grasp. Any characteristic beyond it only disturbs the functioning of this essential core. See also References Citations Sources Sohn-Rethel, Alfred (1977) Intellectual and manual labour: A critique of epistemology, Humanities Press. . Further reading . External links Internet Encyclopedia of Philosophy: Gottlob Frege Discussion at The Well concerning Abstraction hierarchy Concepts in epistemology Concepts in metaphilosophy Concepts in metaphysics Thought
Developmental systems theory
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes. Overview All versions of developmental systems theory espouse the view that: All biological processes (including both evolution and development) operate by continually assembling new structures. Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws. Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms. Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for. In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters or that adaptation consists of evolution ‘shaping’ the more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches. Developmental systems theory: Topics Six Themes of DST Joint Determination by Multiple Causes: Development is a product of multiple interacting sources. Context Sensitivity and Contingency: Development depends on the current state of the organism. Extended Inheritance: An organism inherits resources from the environment in addition to genes. Development as a process of construction: The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge. Distributed Control: Idea that no single source of influence has central control over an organism's development. Evolution As Construction: The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population. A computing metaphor To adopt a computing metaphor, the reductionists (whom developmental systems theory opposes) assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. 
Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data. Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either. Fundamental asymmetry For reductionists there is a fundamental asymmetry between different causal factors, whereas for DST such asymmetries can only be justified by specific purposes, and argue that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes or genetic factors of determination and environmental factors of realisation) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism. DST approach One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilised zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termite must adapt. This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position and maternal effects on gene expression to epigenetic inheritance to the active construction and intergenerational transmission of enduring niches, development systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioural and symbolic – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. 
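The computing metaphor above can be given a small concrete illustration. The fragment below is an analogy offered here, not an example from the DST literature, and the function names are invented: in a language with first-class functions, the very same object is treated as a process in one context and as data in another, which is the kind of context-dependence the process/data critique points to.

```python
# The same object ('double') acts as a process in one context and as data in
# another; the process/data distinction depends on how it is used.

def double(x: int) -> int:
    return 2 * x

# Context 1: 'double' is a process applied to data.
values = [1, 2, 3]
print([double(v) for v in values])                    # [2, 4, 6]

# Context 2: 'double' is itself data, stored and inspected like any other datum.
registry = {"double": double}
print(double.__name__, double.__code__.co_argcount)

import dis
dis.dis(double)                                       # its "content" as bytecode

# Context 3: a new process is built from that data by another process.
def compose(f, g):
    return lambda x: f(g(x))

quadruple = compose(registry["double"], registry["double"])
print(quadruple(5))                                   # 20
```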
DST regards every level of biological structure as susceptible to influence from all the structures by which they are surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of Mendelian genetics, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’. Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation. Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied. Criticism Philosopher Neven Sesardić, while not dismissive of developmental systems theory, argues that its proponents forget that the role between levels of interaction is ultimately an empirical issue, which cannot be settled by a priori speculation; Sesardić observes that while the emergence of lung cancer is a highly complicated process involving the combined action of many factors and interactions, it is not unreasonable to believe that smoking has an effect on developing lung cancer. Therefore, though developmental processes are highly interactive, context dependent, and extremely complex, it is incorrect to conclude main effects of heredity and environment are unlikely to be found in the "messiness". Sesardić argues that the idea that changing the effect of one factor always depends on what is happening in other factors is an empirical claim, as well as a false one; for example, the bacterium Bacillus thuringiensis produces a protein that is toxic to caterpillars. Genes from this bacterium have been placed into plants vulnerable to caterpillars and the insects proceed to die when they eat part of the plant, as they consume the toxic protein. Thus, developmental approaches must be assessed on a case by case basis and in Sesardić's view, DST does not offer much if only posed in general terms. Hereditarian Psychologist Linda Gottfredson differentiates the "fallacy of so–called "interactionism"" from the technical use of gene-environment interaction to denote a non–additive environmental effect conditioned upon genotype. “Interactionism's” over–generalization cannot render attempts to identify genetic and environmental contributions meaningless. 
Where behavioural genetics attempts to determine the portions of trait variation accounted for by genetics, environmental–developmental approaches such as DST attempt to determine the typical course of human development and, in Gottfredson's view, erroneously conclude that the common developmental theme is readily changed. A further argument by Sesardić counters the DST claim that it is impossible to determine the contribution of genes versus environment to a trait: if genes and environment were truly inseparable, as DST holds, it would follow that a trait could not be causally attributed to the environment either; yet DST, while critical of genetic heritability, advocates developmentalist research into environmental effects, which Sesardić regards as a logical inconsistency. Barnes et al. made similar criticisms, observing that the innate human capacity for language (deeply genetic) does not determine the specific language spoken (a contextually environmental effect); it is then, in principle, possible to separate the effects of genes and environment. Similarly, Steven Pinker argues that if genes and environment could not actually be separated, then speakers would have to have a deterministic genetic disposition to learn their specific native language upon exposure. Though seemingly consistent with the idea of gene–environment interaction, Pinker argues this is nonetheless an absurd position, since empirical evidence shows that ancestry has no effect on language acquisition: environmental effects are often separable from genetic ones. Related theories Developmental systems theory is not a narrowly defined collection of ideas, and the boundaries with neighbouring models are porous. Notable related ideas (with key texts) include: The Baldwin effect Evolutionary developmental biology Neural Darwinism Probabilistic epigenesis Relational developmental systems See also Systems theory Complex adaptive system Developmental psychobiology The Dialectical Biologist, a 1985 book by Richard Levins and Richard Lewontin which describes a related approach. Living systems References Bibliography Reprinted as: Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press. Dawkins, R. (1982). The Extended Phenotype. Oxford: Oxford University Press. Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Durham, N.C.: Duke University Press. Edelman, G.M. (1987). Neural Darwinism: Theory of Neuronal Group Selection. New York: Basic Books. Edelman, G.M. and Tononi, G. (2001). Consciousness. How Mind Becomes Imagination. London: Penguin. Goodwin, B.C. (1995). How the Leopard Changed its Spots. London: Orion. Goodwin, B.C. and Saunders, P. (1992). Theoretical Biology. Epigenetic and Evolutionary Order from Complex Systems. Baltimore: Johns Hopkins University Press. Jablonka, E., and Lamb, M.J. (1995). Epigenetic Inheritance and Evolution. The Lamarckian Dimension. London: Oxford University Press. Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press. Levins, R. and Lewontin, R. (1985). The Dialectical Biologist. London: Harvard University Press. Neumann-Held, E.M. (1999). The gene is dead – long live the gene. Conceptualizing genes the constructionist way. In P. Koslowski (ed.). Sociobiology and Bioeconomics: The Theory of Evolution in Economic and Biological Thinking, pp. 105–137. Berlin: Springer. Waddington, C.H. (1957). The Strategy of the Genes. London: Allen and Unwin. Further reading Depew, D.J. and Weber, B.H. (1995). Darwinism Evolving. System Dynamics and the Genealogy of Natural Selection. Cambridge, Massachusetts: MIT Press. Eigen, M. (1992).
Steps Towards Life. Oxford: Oxford University Press. Gray, R.D. (2000). Selfish genes or developmental systems? In Singh, R.S., Krimbas, C.B., Paul, D.B., and Beatty, J. (2000). Thinking about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge University Press: Cambridge. (184-207). Koestler, A., and Smythies, J.R. (1969). Beyond Reductionism. London: Hutchinson. Lehrman, D.S. (1953). A critique of Konrad Lorenz’s theory of instinctive behaviour. Quarterly Review of Biology 28: 337-363. Thelen, E. and Smith, L.B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, Massachusetts: MIT Press. External links William Bechtel, Developmental Systems Theory and Beyond presentation, winter 2006. Biological systems Systems theory Evolutionary biology