Van 't Hoff equation
The Van 't Hoff equation relates the change in the equilibrium constant, K_eq, of a chemical reaction to the change in temperature, T, given the standard enthalpy change, ΔrH°, for the process. The subscript "r" means "reaction" and the superscript "°" means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique (Studies in Dynamic Chemistry).
The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system. The Van 't Hoff plot, which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction.
Equation
Summary and uses
The standard pressure, P°, is used to define the reference state for the Van 't Hoff equation, which is

d(ln K_eq)/dT = ΔrH°/(RT²)

where ln denotes the natural logarithm, K_eq is the thermodynamic equilibrium constant, and R is the ideal gas constant. This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium.
In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy ΔrH° is constant (and furthermore, this is also often assumed to be equal to its value at standard temperature). Since in reality ΔrH° and the standard reaction entropy ΔrS° do vary with temperature for most processes, the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant.
A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature assuming a constant standard enthalpy change over the temperature range. To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as

d(ln K_eq)/d(1/T) = −ΔrH°/R

The definite integral between temperatures T1 and T2 is then

ln(K2/K1) = −(ΔrH°/R)(1/T2 − 1/T1)

In this equation K1 is the equilibrium constant at absolute temperature T1, and K2 is the equilibrium constant at absolute temperature T2.
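As a quick numerical illustration of the integrated equation, the short Python sketch below estimates K2 at a new temperature from K1, T1, T2 and an assumed constant ΔrH°; the input numbers are hypothetical and chosen only for the example, not taken from the text.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def k2_from_van_t_hoff(k1, t1, t2, delta_h):
    """Estimate K2 at T2 from K1 at T1, assuming a constant standard
    reaction enthalpy delta_h (in J/mol) over the interval [T1, T2]:
    ln(K2/K1) = -(delta_h / R) * (1/T2 - 1/T1)."""
    return k1 * math.exp(-(delta_h / R) * (1.0 / t2 - 1.0 / t1))

# Hypothetical endothermic reaction: ΔrH° = +50 kJ/mol, K1 = 1.0 at 298.15 K.
k2 = k2_from_van_t_hoff(k1=1.0, t1=298.15, t2=323.15, delta_h=50e3)
print(f"K2 at 323.15 K ≈ {k2:.2f}")  # K2 > K1, as expected when ΔrH° > 0
```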
Development from thermodynamics
Combining the well-known formula for the Gibbs free energy of reaction,

ΔrG° = ΔrH° − T ΔrS°

where ΔrS° is the standard entropy change of the reaction, with the Gibbs free energy isotherm equation

ΔrG° = −RT ln K_eq

we obtain

ln K_eq = −ΔrH°/(RT) + ΔrS°/R
Differentiation of this expression with respect to the variable T, while assuming that both ΔrH° and ΔrS° are independent of T, yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations.
Provided that ΔrH° and ΔrS° are constant, the preceding equation gives ln K_eq as a linear function of 1/T, and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by −R to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by R to obtain the standard entropy change.
Van 't Hoff isotherm
The Van 't Hoff isotherm can be used to determine the Gibbs free energy of reaction for non-standard-state conditions at a constant temperature:

ΔrG = (∂G/∂ξ)_{T,p} = ΔrG° + RT ln Qr

where ΔrG is the Gibbs free energy of reaction under non-standard states at temperature T, ΔrG° is the standard Gibbs free energy of the reaction, ξ is the extent of reaction, and Qr is the thermodynamic reaction quotient. Since ΔrG° = −RT ln K_eq, the temperature dependence of both terms can be described by Van 't Hoff equations as a function of T. This finds applications in the field of electrochemistry, particularly in the study of the temperature dependence of voltaic cells.
The isotherm can also be used at fixed temperature to describe the Law of Mass Action. When a reaction is at equilibrium, Qr = K_eq and ΔrG = 0. Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when ΔrG < 0, the reaction moves in the forward direction, whereas when ΔrG > 0, the reaction moves in the reverse direction. See Chemical equilibrium.
Van 't Hoff plot
For a reversible reaction, the equilibrium constant can be measured at a variety of temperatures. This data can be plotted on a graph with ln K_eq on the y-axis and 1/T on the x-axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation

ln K_eq = −(ΔrH°/R)(1/T) + ΔrS°/R

This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction. From this plot, −ΔrH°/R is the slope, and ΔrS°/R is the intercept of the linear fit.
By measuring the equilibrium constant, K_eq, at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using

ΔrH° = −R × slope,
ΔrS° = R × intercept.
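A minimal sketch of this fitting procedure, using invented equilibrium-constant data and NumPy's polynomial fit, is shown below; it follows the sign conventions of the linear form above (slope = −ΔrH°/R, intercept = ΔrS°/R).

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical measurements: equilibrium constants at several temperatures.
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])  # K
K = np.array([0.50, 0.80, 1.25, 1.90, 2.80])        # dimensionless

# Van 't Hoff plot: ln K versus 1/T, fitted with a straight line.
slope, intercept = np.polyfit(1.0 / T, np.log(K), deg=1)

delta_H = -R * slope      # standard reaction enthalpy, J/mol
delta_S = R * intercept   # standard reaction entropy, J/(mol K)

print(f"ΔrH° ≈ {delta_H / 1000:.1f} kJ/mol")
print(f"ΔrS° ≈ {delta_S:.1f} J/(mol K)")
```

With real data, the residuals of such a fit are worth inspecting, since curvature in the plot signals a temperature-dependent enthalpy (see the error-propagation and temperature-dependence sections below).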
The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot.
Endothermic reactions
For an endothermic reaction, heat is absorbed, making the net enthalpy change positive. Thus, according to the definition of the slope,

slope = −ΔrH°/R

When the reaction is endothermic, ΔrH° > 0 (and the gas constant R > 0), so

slope < 0
Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope.
Exothermic reactions
For an exothermic reaction, heat is released, making the net enthalpy change negative. Thus, according to the definition of the slope,

slope = −ΔrH°/R

For an exothermic reaction, ΔrH° < 0, so

slope > 0
Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope.
Error propagation
At first glance, using the fact that ΔrG° = −RT ln K_eq = ΔrH° − TΔrS°, it would appear that two measurements of K_eq would suffice to be able to obtain an accurate value of ΔrH°:

ΔrH° = R (ln K2 − ln K1) / (1/T1 − 1/T2)

where K1 and K2 are the equilibrium constant values obtained at temperatures T1 and T2 respectively. However, the precision of ΔrH° values obtained in this way is highly dependent on the precision of the measured equilibrium constant values.
The use of error propagation shows that the error in ΔrH° will be about 76 kJ/mol times the experimental uncertainty in (ln K2 − ln K1), or about 110 kJ/mol times the uncertainty in the individual ln K values. Similar considerations apply to the entropy of reaction obtained from ΔrS° = R ln K_eq + ΔrH°/T.
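The quoted magnification factors depend on the pair of temperatures used, which the text does not specify; the sketch below assumes, purely for illustration, a 10 K interval near room temperature (T1 = 298.15 K, T2 = 308.15 K) and shows that the propagation prefactor then comes out on the order of the values quoted above.

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Assumed temperatures (not given in the text): a 10 K interval near 25 °C.
T1, T2 = 298.15, 308.15

# Two-point formula: ΔrH° = R * (ln K2 - ln K1) / (1/T1 - 1/T2),
# so an error u in (ln K2 - ln K1) is magnified by this prefactor:
prefactor = R / (1.0 / T1 - 1.0 / T2)  # equivalently R*T1*T2/(T2 - T1), in J/mol
print(f"≈ {prefactor / 1000:.0f} kJ/mol per unit error in (ln K2 - ln K1)")

# If each ln K carries an independent uncertainty u, their difference carries
# sqrt(2)*u, so the factor per individual ln K value is sqrt(2) larger.
print(f"≈ {math.sqrt(2) * prefactor / 1000:.0f} kJ/mol per unit error in each ln K")
```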
Notably, when equilibrium constants are measured at three or more temperatures, values of ΔrH° and ΔrS° are often obtained by straight-line fitting. The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this.
Applications of the Van 't Hoff plot
Van 't Hoff analysis
In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error.
Assume two products B and C form in a reaction:
a A + d D → b B,
a A + d D → c C.
In this case, K can be defined as the ratio of B to C rather than the equilibrium constant.
When K > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region.
When K < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region.
Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product.
In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C-terminus or the N-terminus of the amino acid proline. The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C-terminus, but entropically it was more favorable to hydrogen bond with the N-terminus. Specifically, they found that C-terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N-terminus hydrogen bonding was favored by 31–43 J/(K mol).
This data alone could not conclude which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C-terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N-terminus, was preferred.
Mechanistic studies
A chemical reaction may undergo different reaction mechanisms at different temperatures.
In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, which indicates different changes in enthalpy and entropy for each distinct mechanism. The Van 't Hoff plot can be used to find the enthalpy and entropy change for each mechanism and the favored mechanism at different temperatures.
In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature.
Temperature dependence
If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibits a non-constant standard enthalpy of reaction:

ln K_eq = a + b/T + c/T²

where a, b and c are the coefficients obtained from the fit.
Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists.
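One way to carry out such an analysis numerically is sketched below: ln K is fitted with a quadratic polynomial in 1/T, consistent with the form above, and the Van 't Hoff equation itself (ΔrH°(T) = −R d(ln K)/d(1/T)) is then applied to the fitted curve to recover a temperature-dependent enthalpy. The data are invented for illustration.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical data showing curvature in the Van 't Hoff plot.
T = np.array([280.0, 300.0, 320.0, 340.0, 360.0])  # K
lnK = np.array([1.10, 0.55, 0.18, -0.05, -0.18])

x = 1.0 / T
# Quadratic fit: ln K = a + b*(1/T) + c*(1/T)**2
c, b, a = np.polyfit(x, lnK, deg=2)  # polyfit returns the highest power first

def delta_H(temp):
    """ΔrH°(T) = -R * d(ln K)/d(1/T), evaluated on the fitted polynomial."""
    return -R * (b + 2.0 * c / temp)

for temp in (280.0, 320.0, 360.0):
    print(f"T = {temp:.0f} K: ΔrH° ≈ {delta_H(temp) / 1000:.1f} kJ/mol")
```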
Surfactant self-assembly
The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy of surfactants from the temperature dependence of the critical micelle concentration (CMC):

ΔH°mic = −RT² d(ln CMC)/dT
However, the relation loses its validity when the aggregation number is also temperature-dependent, and the following relation should be used instead:
with and being the free energies of the surfactant in a micelle with aggregation number and respectively. This effect is particularly relevant for nonionic ethoxylated surfactants or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms.
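As a rough numerical sketch of the simpler, aggregation-number-independent case, the code below estimates a micellization enthalpy from the temperature dependence of ln CMC by finite differences, assuming the commonly used closed-association form ΔH°mic ≈ −RT² d(ln CMC)/dT. The CMC values are invented, and the extended aggregation-number-dependent relation mentioned above is not reproduced here.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical CMC data for a nonionic surfactant at several temperatures.
T = np.array([288.15, 298.15, 308.15, 318.15])      # K
cmc = np.array([1.05e-4, 9.0e-5, 8.1e-5, 7.6e-5])   # mol/L

# Numerical derivative of ln(CMC) with respect to T (finite differences).
dlncmc_dT = np.gradient(np.log(cmc), T)

# Assumed relation: ΔH°mic ≈ -R * T**2 * d(ln CMC)/dT
delta_H_mic = -R * T**2 * dlncmc_dT

for temp, dh in zip(T, delta_H_mic):
    print(f"T = {temp:.2f} K: ΔH°mic ≈ {dh / 1000:.1f} kJ/mol")
```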
See also
Clausius–Clapeyron relation
Van 't Hoff factor
Gibbs–Helmholtz equation
Solubility equilibrium
Arrhenius equation
References
Equilibrium chemistry
Eponymous equations of physics
Thermochemistry
Jacobus Henricus van 't Hoff
Relativism
Relativism is a family of philosophical views which deny claims to objectivity within a particular domain and assert that valuations in that domain are relative to the perspective of an observer or the context in which they are assessed. There are many different forms of relativism, with a great deal of variation in scope and differing degrees of controversy among them. Moral relativism encompasses the differences in moral judgments among people and cultures. Epistemic relativism holds that there are no absolute principles regarding normative belief, justification, or rationality, and that there are only relative ones. Alethic relativism (also factual relativism) is the doctrine that there are no absolute truths, i.e., that truth is always relative to some particular frame of reference, such as a language or a culture (cultural relativism). Some forms of relativism also bear a resemblance to philosophical skepticism. Descriptive relativism seeks to describe the differences among cultures and people without evaluation, while normative relativism evaluates the truthfulness of views within a given framework.
Forms of relativism
Anthropological versus philosophical relativism
Anthropological relativism refers to a methodological stance, in which the researcher suspends (or brackets) their own cultural prejudice while trying to understand beliefs or behaviors in their contexts. This has become known as methodological relativism, and concerns itself specifically with avoiding ethnocentrism or the application of one's own cultural standards to the assessment of other cultures. This is also the basis of the so-called "emic" and "etic" distinction, in which:
An emic or insider account of behavior is a description of a society in terms that are meaningful to the participant or actor's own culture; an emic account is therefore culture-specific, and typically refers to what is considered "common sense" within the culture under observation.
An etic or outsider account is a description of a society by an observer, in terms that can be applied to other cultures; that is, an etic account is culturally neutral, and typically refers to the conceptual framework of the social scientist. (This is complicated when it is scientific research itself that is under study, or when there is theoretical or terminological disagreement within the social sciences.)
Philosophical relativism, in contrast, asserts that the truth of a proposition depends on the metaphysical, or theoretical frame, or the instrumental method, or the context in which the proposition is expressed, or on the person, groups, or culture who interpret the proposition.
Methodological relativism and philosophical relativism can exist independently from one another, but most anthropologists base their methodological relativism on that of the philosophical variety.
Descriptive versus normative relativism
The concept of relativism also has importance both for philosophers and for anthropologists in another way. In general, anthropologists engage in descriptive relativism ("how things are" or "how things seem"), whereas philosophers engage in normative relativism ("how things ought to be"), although there is some overlap (for example, descriptive relativism can pertain to concepts, normative relativism to truth).
Descriptive relativism assumes that certain cultural groups have different modes of thought, standards of reasoning, and so forth, and it is the anthropologist's task to describe, but not to evaluate the validity of these principles and practices of a cultural group. It is possible for an anthropologist in his or her fieldwork to be a descriptive relativist about some things that typically concern the philosopher (e.g., ethical principles) but not about others (e.g., logical principles). However, the descriptive relativist's empirical claims about epistemic principles, moral ideals and the like are often countered by anthropological arguments that such things are universal, and much of the recent literature on these matters is explicitly concerned with the extent of, and evidence for, cultural or moral or linguistic or human universals.
The fact that the various species of descriptive relativism are empirical claims may tempt the philosopher to conclude that they are of little philosophical interest, but there are several reasons why this is not so. First, some philosophers, notably Kant, argue that certain sorts of cognitive differences between human beings (or even all rational beings) are impossible, so such differences could never be found to obtain in fact, an argument that places a priori limits on what empirical inquiry could discover and on what versions of descriptive relativism could be true. Second, claims about actual differences between groups play a central role in some arguments for normative relativism (for example, arguments for normative ethical relativism often begin with claims that different groups in fact have different moral codes or ideals). Finally, the anthropologist's descriptive account of relativism helps to separate the fixed aspects of human nature from those that can vary, and so a descriptive claim that some important aspect of experience or thought does (or does not) vary across groups of human beings tells us something important about human nature and the human condition.
Normative relativism concerns normative or evaluative claims that modes of thought, standards of reasoning, or the like are only right or wrong relative to a framework. 'Normative' is meant in a general sense, applying to a wide range of views; in the case of beliefs, for example, normative correctness equals truth. This does not mean, of course, that framework-relative correctness or truth is always clear, the first challenge being to explain what it amounts to in any given case (e.g., with respect to concepts, truth, epistemic norms). Normative relativism (say, in regard to normative ethical relativism) therefore implies that things (say, ethical claims) are not simply true in themselves, but only have truth values relative to broader frameworks (say, moral codes). (Many normative ethical relativist arguments run from premises about ethics to conclusions that assert the relativity of truth values, bypassing general claims about the nature of truth, but it is often more illuminating to consider the type of relativism under question directly.)
Legal relativism
In English common law, two (perhaps three) separate standards of proof are recognized:
proof based on the balance of probabilities is the lesser standard used in civil litigation, which cases mostly concern money or some other penalty, that, if further and better evidence should emerge, is reasonably reversible.
proof beyond reasonable doubt is used in criminal law cases where an accused's right to personal freedom or survival is in question, because such punishment is not reasonably reversible.
Absolute truth is so complex as to be capable of being fully understood only by the omniscient (established during the Tudor period as the one true God).
Related and contrasting positions
Relationism is the theory that there are only relations between individual entities, and no intrinsic properties. Despite the similarity in name, it is held by some to be a position distinct from relativism—for instance, because "statements about relational properties [...] assert an absolute truth about things in the world".
On the other hand, others wish to equate relativism, relationism and even relativity, which is a precise theory of relationships between physical objects. Nevertheless, "This confluence of relativity theory with relativism became a strong contributing factor in the increasing prominence of relativism".
Whereas previous investigations of science only sought sociological or psychological explanations of failed scientific theories or pathological science, the 'strong programme' is more relativistic, assessing scientific truth and falsehood equally in a historic and cultural context.
Criticisms
A common argument against relativism suggests that it inherently refutes itself: the statement "all is relative" classes either as a relative statement or as an absolute one. If it is relative, then this statement does not rule out absolutes. If the statement is absolute, on the other hand, then it provides an example of an absolute statement, proving that not all truths are relative. However, this argument against relativism only applies to relativism that positions truth as relative–i.e. epistemological/truth-value relativism. More specifically, it is only extreme forms of epistemological relativism that can come in for this criticism as there are many epistemological relativists who posit that some aspects of what is regarded as factually "true" are not universal, yet still accept that other universal truths exist (e.g. gas laws or moral laws).
Another argument against relativism posits a Natural Law. Simply put, the physical universe works under basic principles: the "Laws of Nature". Some contend that a natural Moral Law may also exist, as argued for example by Immanuel Kant in Critique of Practical Reason and Richard Dawkins in The God Delusion (2006), and as addressed by C. S. Lewis in Mere Christianity (1952). Dawkins said "I think we face an equal but much more sinister challenge from the left, in the shape of cultural relativism - the view that scientific truth is only one kind of truth and it is not to be especially privileged".
Philosopher Hilary Putnam, among others, states that some forms of relativism make it impossible to believe one is in error. If there is no truth beyond an individual's belief that something is true, then an individual cannot hold their own beliefs to be false or mistaken. A related criticism is that relativizing truth to individuals destroys the distinction between truth and belief.
Views
Philosophical
Ancient
Sophism
Sophists are considered the founding fathers of relativism in Western philosophy. Elements of relativism emerged among the Sophists in the 5th century BC. Notably, it was Protagoras who coined the phrase, "Man is the measure of all things: of things which are, that they are, and of things which are not, that they are not." The thinking of the Sophists is mainly known through their opponent, Plato. In a paraphrase from Plato's dialogue Theaetetus, Protagoras said: "What is true for you is true for you, and what is true for me is true for me."
Modern
Bernard Crick
Bernard Crick, a British political scientist and advocate of relativism, suggested in In Defence of Politics (1962) that moral conflict between people is inevitable. He thought that only ethics can resolve such conflict, and when that occurs in public it results in politics. Accordingly, Crick saw the process of dispute resolution, harms reduction, mediation or peacemaking as central to all of moral philosophy. He became an important influence on feminists and later on the Greens.
Paul Feyerabend
Philosopher of science Paul Feyerabend is often considered to be a relativist, although he denied being one.
Feyerabend argued that modern science suffers from being methodologically monistic (the belief that only a single methodology can produce scientific progress). Feyerabend summarises his case in Against Method with the phrase "anything goes".
In an aphorism [Feyerabend] often repeated, "potentially every culture is all cultures". This is intended to convey that world views are not hermetically closed, since their leading concepts have an "ambiguity" - better, an open-endedness - which enables people from other cultures to engage with them. [...] It follows that relativism, understood as the doctrine that truth is relative to closed systems, can get no purchase. [...] For Feyerabend, both hermetic relativism and its absolutist rival [realism] serve, in their different ways, to "devalue human existence". The former encourages that unsavoury brand of political correctness which takes the refusal to criticise "other cultures" to the extreme of condoning murderous dictatorship and barbaric practices. The latter, especially in its favoured contemporary form of "scientific realism", with the excessive prestige it affords to the abstractions of "the monster 'science'", is in bed with a politics which likewise disdains variety, richness and everyday individuality - a politics which likewise "hides" its norms behind allegedly neutral facts, "blunts choices and imposes laws".
Thomas Kuhn
Thomas Kuhn's philosophy of science, as expressed in The Structure of Scientific Revolutions, is often interpreted as relativistic. He claimed that, as well as progressing steadily and incrementally ("normal science"), science undergoes periodic revolutions or "paradigm shifts", leaving scientists working in different paradigms with difficulty in even communicating. Thus the truth of a claim, or the existence of a posited entity, is relative to the paradigm employed. However, it is not necessary for him to embrace relativism because every paradigm presupposes the prior, building upon itself through history and so on. This leads to there being a fundamental, incremental, and referential structure of development which is not relative but again, fundamental.
From these remarks, one thing is however certain: Kuhn is not saying that incommensurable theories cannot be compared - what they can't be is compared in terms of a system of common measure. He very plainly says that they can be compared, and he reiterates this repeatedly in later work, in a (mostly in vain) effort to avert the crude and sometimes catastrophic misinterpretations he suffered from mainstream philosophers and post-modern relativists alike.
But Kuhn rejected the accusation of being a relativist later in his postscript:
scientific development is ... a unidirectional and irreversible process. Later scientific theories are better than earlier ones for solving puzzles ... That is not a relativist's position, and it displays the sense in which I am a convinced believer in scientific progress.
Some have argued that one can also read Kuhn's work as essentially positivist in its ontology: the revolutions he posits are epistemological, lurching toward a presumably 'better' understanding of an objective reality through the lens presented by the new paradigm. However, a number of passages in Structure do indeed appear to be distinctly relativist, and to directly challenge the notion of an objective reality and the ability of science to progress towards an ever-greater grasp of it, particularly through the process of paradigm change.
In the sciences there need not be progress of another sort. We may, to be more precise, have to relinquish the notion, explicit or implicit, that changes of paradigm carry scientists and those who learn from them closer and closer to the truth.
We are all deeply accustomed to seeing science as the one enterprise that draws constantly nearer to some goal set by nature in advance. But need there be any such goal? Can we not account for both science's existence and its success in terms of evolution from the community's state of knowledge at any given time? Does it really help to imagine that there is some one full, objective, true account of nature and that the proper measure of scientific achievement is the extent to which it brings us closer to that ultimate goal?
George Lakoff and Mark Johnson
George Lakoff and Mark Johnson define relativism in Metaphors We Live By as the rejection of both subjectivism and metaphysical objectivism in order to focus on the relationship between them, i.e. the metaphor by which we relate our current experience to our previous experience. In particular, Lakoff and Johnson characterize "objectivism" as a "straw man", and, to a lesser degree, criticize the views of Karl Popper, Kant and Aristotle.
Robert Nozick
In his book Invariances, Robert Nozick expresses a complex set of theories about the absolute and the relative. He thinks the absolute/relative distinction should be recast in terms of an invariant/variant distinction, where there are many things a proposition can be invariant with regard to or vary with. He thinks it is coherent for truth to be relative, and speculates that it might vary with time. He thinks necessity is an unobtainable notion, but can be approximated by robust invariance across a variety of conditions—although we can never identify a proposition that is invariant with regard to everything. Finally, he is not particularly warm to one of the most famous forms of relativism, moral relativism, preferring an evolutionary account.
Joseph Margolis
Joseph Margolis advocates a view he calls "robust relativism" and defends it in his books Historied Thought, Constructed World, Chapter 4 (California, 1995) and The Truth about Relativism (Blackwell, 1991). He opens his account by stating that our logics should depend on what we take to be the nature of the sphere to which we wish to apply our logics. Holding that there can be no distinctions which are not "privileged" between the alethic, the ontic, and the epistemic, he maintains that a many-valued logic just might be the most apt for aesthetics or history since, in these practices, we are loath to hold to simple binary logic; and he also holds that many-valued logic is relativistic. (This is perhaps an unusual definition of "relativistic". Compare with his comments on "relationism".) To say that "True" and "False" are mutually exclusive and exhaustive judgements on Hamlet, for instance, really does seem absurd. A many-valued logic, with its values "apt", "reasonable", "likely", and so on, seems intuitively more applicable to interpreting Hamlet. Where apparent contradictions arise between such interpretations, we might call the interpretations "incongruent", rather than dubbing either of them "false", because using many-valued logic implies that a measured value is a mixture of two extreme possibilities. Using the subset of many-valued logic, fuzzy logic, it can be said that various interpretations can be represented by membership in more than one possible truth set simultaneously. Fuzzy logic is therefore probably the best mathematical structure for understanding "robust relativism" and has been interpreted by Bart Kosko as philosophically being related to Zen Buddhism.
It was Aristotle who held that relativism implies that we should, sticking with appearances only, end up contradicting ourselves somewhere if we could apply all attributes to all ousiai (beings). Aristotle, however, made non-contradiction dependent upon his essentialism. If his essentialism is false, then so too is his ground for disallowing relativism. (Subsequent philosophers have found other reasons for supporting the principle of non-contradiction.)
Beginning with Protagoras and invoking Charles Sanders Peirce, Margolis shows that the historic struggle to discredit relativism is an attempt to impose an unexamined belief in the world's essentially rigid rule-like nature. Plato and Aristotle merely attacked "relationalism" (the doctrine of true for l or true for k, and the like, where l and k are different speakers or different worlds) or something similar (most philosophers would call this position "relativism"). For Margolis, "true" means true; that is, the alethic use of "true" remains untouched. However, in real world contexts, and context is ubiquitous in the real world, we must apply truth values. Here, in epistemic terms, we might tout court retire "true" as an evaluation and keep "false". The rest of our value-judgements could be graded from "extremely plausible" down to "false". Judgements which on a bivalent logic would be incompatible or contradictory are further seen as "incongruent", although one may well have more weight than the other. In short, relativistic logic is not, or need not be, the bugbear it is often presented to be. It may simply be the best type of logic to apply to certain very uncertain spheres of real experiences in the world (although some sort of logic needs to be applied in order to make that judgement). Those who swear by bivalent logic might simply be the ultimate keepers of the great fear of the flux.
Richard Rorty
Philosopher Richard Rorty has a somewhat paradoxical role in the debate over relativism: he is criticized for his relativistic views by many commentators, but has always denied that relativism applies to much of anybody, it being nothing more than a Platonic scarecrow. Rorty claims, rather, that he is a pragmatist, and that to construe pragmatism as relativism is to beg the question.
'"Relativism" is the traditional epithet applied to pragmatism by realists'
'"Relativism" is the view that every belief on a certain topic, or perhaps about any topic, is as good as every other. No one holds this view. Except for the occasional cooperative freshman, one cannot find anybody who says that two incompatible opinions on an important topic are equally good. The philosophers who get called 'relativists' are those who say that the grounds for choosing between such opinions are less algorithmic than had been thought.'
'In short, my strategy for escaping the self-referential difficulties into which "the Relativist" keeps getting himself is to move everything over from epistemology and metaphysics into cultural politics, from claims to knowledge and appeals to self-evidence to suggestions about what we should try.'
Rorty takes a deflationary attitude to truth, believing there is nothing of interest to be said about truth in general, including the contention that it is generally subjective. He also argues that the notion of warrant or justification can do most of the work traditionally assigned to the concept of truth, and that justification is relative; justification is justification to an audience, for Rorty.
In Contingency, Irony, and Solidarity he argues that the debate between so-called relativists and so-called objectivists is beside the point because they do not have enough premises in common for either side to prove anything to the other.
Nalin de Silva
In his book Mage Lokaya (My World) (1986), Nalin de Silva criticized the basis of the established western system of knowledge and its propagation, which he refers to as "domination throughout the world". He explained in this book that a mind-independent reality is impossible and that knowledge is not found but constructed. Further, he has introduced and developed the concept of "Constructive Relativism" as the basis on which knowledge is constructed relative to the sense organs, culture and the mind, completely based on Avidya.
Colin Murray Turbayne
In his final book, Metaphors for the Mind: The Creative Mind and Its Origins (1991), Colin Murray Turbayne joins the debate about relativism and realism by providing an analysis of the manner in which Platonic metaphors, first presented in the procreation model of the Timaeus dialogue, have evolved over time to influence the philosophical works of both George Berkeley and Immanuel Kant. In addition, he illustrates the manner in which these ancient Greek metaphors have subsequently evolved to impact the development of the theories of "substance" and "attribute", which in turn have dominated the development of human thought and language in the 20th century.
In his The Myth of Metaphor (1962) Turbayne argues that it is perfectly possible to transcend the limitations which are inherent in such metaphors, including those incorporated within the framework of classical "objective" mechanistic Newtonian cosmology and scientific materialism in general. In Turbayne's view, one can strive to embrace a more satisfactory epistemology by first acknowledging the limitations imposed by such metaphorical systems. This can readily be accomplished by restoring Plato's metaphorical model to its original state in which both "male" and "female" aspects of the mind work in concert within the context of a harmonious balance during the process of creation.
Postmodernism
The term "relativism" often comes up in debates over postmodernism, poststructuralism and phenomenology. Critics of these perspectives often identify advocates with the label "relativism". For example, the Sapir–Whorf hypothesis is often considered a relativist view because it posits that linguistic categories and structures shape the way people view the world. Stanley Fish has defended postmodernism and relativism.
These perspectives do not strictly count as relativist in the philosophical sense, because they express agnosticism on the nature of reality and make epistemological rather than ontological claims. Nevertheless, the term is useful to differentiate them from realists who believe that the purpose of philosophy, science, or literary critique is to locate externally true meanings. Important philosophers and theorists such as Michel Foucault and Max Stirner, and political movements such as post-anarchism or post-Marxism, can also be considered relativist in this sense, though a better term might be social constructivist.
The spread and popularity of this kind of "soft" relativism varies between academic disciplines. It has wide support in anthropology and has a majority following in cultural studies. It also has advocates in political theory and political science, sociology, and continental philosophy (as distinct from Anglo-American analytical philosophy). It has inspired empirical studies of the social construction of meaning such as those associated with labelling theory, which defenders can point to as evidence of the validity of their theories (albeit risking accusations of performative contradiction in the process). Advocates of this kind of relativism often also claim that recent developments in the natural sciences, such as Heisenberg's uncertainty principle, quantum mechanics, chaos theory and complexity theory show that science is now becoming relativistic. However, many scientists who use these methods continue to identify as realist or post-positivist, and some sharply criticize the association.
Religious
Buddhism
Madhyamaka Buddhism, which forms the basis for many Mahayana Buddhist schools, was founded by Nāgārjuna, who taught the idea of relativity. In the Ratnāvalī, he gives the example that shortness exists only in relation to the idea of length. The determination of a thing or object is only possible in relation to other things or objects, especially by way of contrast. He held that the relationship between the ideas of "short" and "long" is not due to intrinsic nature (svabhāva). This idea is also found in the Pali Nikāyas and Chinese Āgamas, in which the idea of relativity is expressed similarly: "That which is the element of light ... is seen to exist on account of [in relation to] darkness; that which is the element of good is seen to exist on account of bad; that which is the element of space is seen to exist on account of form."
Madhyamaka Buddhism discerns two levels of truth: relative and ultimate. The two truths doctrine states that there is relative or conventional, common-sense truth, which describes our daily experience of a concrete world, and ultimate truth, which describes the ultimate reality as sunyata, empty of concrete and inherent characteristics. Conventional truth may be understood, in contrast, as "obscurative truth" or "that which obscures the true nature". It is constituted by the appearances of mistaken awareness. Conventional truth would be the appearance that includes a duality of apprehender and apprehended, and objects perceived within that. Ultimate truth is the phenomenal world free from the duality of apprehender and apprehended.
Catholicism
The Catholic Church, especially under John Paul II and Pope Benedict XVI, has identified relativism as one of the most significant problems for faith and morals today.
According to the Church and to some theologians, relativism, as a denial of absolute truth, leads to moral license and a denial of the possibility of sin and of God. Whether moral or epistemological, relativism constitutes a denial of the capacity of the human mind and reason to arrive at truth. Truth, according to Catholic theologians and philosophers (following Aristotle) consists of adequatio rei et intellectus, the correspondence of the mind and reality. Another way of putting it states that the mind has the same form as reality. This means when the form of the computer in front of someone (the type, color, shape, capacity, etc.) is also the form that is in their mind, then what they know is true because their mind corresponds to objective reality.
The denial of an absolute reference, of an axis mundi, denies God, who equates to Absolute Truth, according to these Christian theologians. They link relativism to secularism, an obstruction of religion in human life.
Leo XIII
Pope Leo XIII (1810–1903) was the first known Pope to use the word "relativism", in his encyclical Humanum genus (1884). Leo condemned Freemasonry and claimed that its philosophical and political system was largely based on relativism.
John Paul II
John Paul II wrote in Veritatis Splendor
As is immediately evident, the crisis of truth is not unconnected with this development. Once the idea of a universal truth about the good, knowable by human reason, is lost, inevitably the notion of conscience also changes. Conscience is no longer considered in its primordial reality as an act of a person's intelligence, the function of which is to apply the universal knowledge of the good in a specific situation and thus to express a judgment about the right conduct to be chosen here and now. Instead, there is a tendency to grant to the individual conscience the prerogative of independently determining the criteria of good and evil and then acting accordingly. Such an outlook is quite congenial to an individualist ethic, wherein each individual is faced with his own truth, different from the truth of others. Taken to its extreme consequences, this individualism leads to a denial of the very idea of human nature.
In Evangelium Vitae (The Gospel of Life), he says:
Freedom negates and destroys itself, and becomes a factor leading to the destruction of others, when it no longer recognizes and respects its essential link with the truth. When freedom, out of a desire to emancipate itself from all forms of tradition and authority, shuts out even the most obvious evidence of an objective and universal truth, which is the foundation of personal and social life, then the person ends up by no longer taking as the sole and indisputable point of reference for his own choices the truth about good and evil, but only his subjective and changeable opinion or, indeed, his selfish interest and whim.
Benedict XVI
In April 2005, in his homily during Mass prior to the conclave which would elect him as Pope, then Cardinal Joseph Ratzinger talked about the world "moving towards a dictatorship of relativism":
How many winds of doctrine we have known in recent decades, how many ideological currents, how many ways of thinking. The small boat of thought of many Christians has often been tossed about by these waves – thrown from one extreme to the other: from Marxism to liberalism, even to libertinism; from collectivism to radical individualism; from atheism to a vague religious mysticism; from agnosticism to syncretism, and so forth. Every day new sects are created and what Saint Paul says about human trickery comes true, with cunning which tries to draw those into error (cf Ephesians 4, 14). Having a clear Faith, based on the Creed of the Church, is often labeled today as a fundamentalism. Whereas, relativism, which is letting oneself be tossed and "swept along by every wind of teaching", looks like the only attitude acceptable to today's standards. We are moving towards a dictatorship of relativism which does not recognize anything as certain and which has as its highest goal one's own ego and one's own desires. However, we have a different goal: the Son of God, true man. He is the measure of true humanism. Being an "Adult" means having a faith which does not follow the waves of today's fashions or the latest novelties. A faith which is deeply rooted in friendship with Christ is adult and mature. It is this friendship which opens us up to all that is good and gives us the knowledge to judge true from false, and deceit from truth.
On June 6, 2005, Pope Benedict XVI told educators:
Today, a particularly insidious obstacle to the task of education is the massive presence in our society and culture of that relativism which, recognizing nothing as definitive, leaves as the ultimate criterion only the self with its desires. And under the semblance of freedom it becomes a prison for each one, for it separates people from one another, locking each person into his or her own 'ego'.
Then during the World Youth Day in August 2005, he also traced to relativism the problems produced by the communist and sexual revolutions, and provided a counter-counter argument.
In the last century we experienced revolutions with a common programme–expecting nothing more from God, they assumed total responsibility for the cause of the world in order to change it. And this, as we saw, meant that a human and partial point of view was always taken as an absolute guiding principle. Absolutizing what is not absolute but relative is called totalitarianism. It does not liberate man, but takes away his dignity and enslaves him. It is not ideologies that save the world, but only a return to the living God, our Creator, the Guarantor of our freedom, the Guarantor of what is really good and true.
Pope Francis
Pope Francis refers in Evangelii gaudium to two forms of relativism, "doctrinal relativism" and a "practical relativism" typical of "our age". The latter is allied to "widespread indifference" to systems of belief.
Jainism
Mahavira (599-527 BC), the 24th Tirthankara of Jainism, developed a philosophy known as Anekantavada. John Koller describes anekāntavāda as "epistemological respect for view of others" about the nature of existence, whether it is "inherently enduring or constantly changing", but "not relativism; it does not mean conceding that all arguments and all views are equal".
Sikhism
In Sikhism the Gurus (spiritual teachers) have propagated the message of "many paths" leading to the one God and ultimate salvation for all souls who tread on the path of righteousness. They have supported the view that proponents of all faiths can, by doing good and virtuous deeds and by remembering the Lord, certainly achieve salvation. The students of the Sikh faith are told to accept all leading faiths as possible vehicles for attaining spiritual enlightenment provided the faithful study, ponder and practice the teachings of their prophets and leaders. The holy book of the Sikhs called the Sri Guru Granth Sahib says: "Do not say that the Vedas, the Bible and the Koran are false. Those who do not contemplate them are false." Guru Granth Sahib page 1350; later stating: "The seconds, minutes, and hours, days, weeks and months, and the various seasons originate from the one Sun; O nanak, in just the same way, the many forms originate from the Creator." Guru Granth Sahib page 12,13.
See also
References
Bibliography
Maria Baghramian, Relativism, London: Routledge, 2004.
Gad Barzilai, Communities and Law: Politics and Cultures of Legal Identities, Ann Arbor: University of Michigan Press, 2003.
Andrew Lionel Blais, On the Plurality of Actual Worlds, University of Massachusetts Press, 1997.
Benjamin Brown, Thoughts and Ways of Thinking: Source Theory and Its Applications, London: Ubiquity Press, 2017.
Ernest Gellner, Relativism and the Social Sciences, Cambridge: Cambridge University Press, 1985.
Rom Harré and Michael Krausz, Varieties of Relativism, Oxford, UK; New York, NY: Blackwell, 1996.
Robert H. Knight, The Age of Consent: The Rise of Relativism and the Corruption of Popular Culture, Dallas, Tex.: Spence Publishing Co., 1998.
Michael Krausz, ed., Relativism: A Contemporary Anthology, New York: Columbia University Press, 2010.
Martin Hollis and Steven Lukes, Rationality and Relativism, Oxford: Basil Blackwell, 1982.
Joseph Margolis, Michael Krausz, and R. M. Burian, eds., Rationality, Relativism, and the Human Sciences, Dordrecht; Boston: M. Nijhoff, 1986.
Jack W. Meiland and Michael Krausz, eds., Relativism, Cognitive and Moral, Notre Dame: University of Notre Dame Press, 1982.
Markus Seidel, Epistemic Relativism: A Constructive Critique, Basingstoke: Palgrave Macmillan, 2014.
External links
Westacott, E. Relativism, 2005, Internet Encyclopedia of Philosophy
Westacott, E. Cognitive Relativism, 2006, Internet Encyclopedia of Philosophy
Professor Ronald Jones on relativism
What 'Being Relative' Means, a passage from Pierre Lecomte du Nouy's "Human Destiny" (1947)
BBC Radio 4 series "In Our Time", on Relativism - the battle against transcendent knowledge, 19 January 2006
Against Relativism, by Christopher Norris
The Catholic Encyclopedia
Harvey Siegel reviews Paul Boghossian's Fear of Knowledge
Epistemological schools and traditions
De novo synthesis
In chemistry, de novo synthesis is the synthesis of complex molecules from simple molecules such as sugars or amino acids, as opposed to recycling after partial degradation. For example, nucleotides are not needed in the diet as they can be constructed from small precursor molecules such as formate and aspartate. Methionine, on the other hand, is needed in the diet because while it can be degraded to and then regenerated from homocysteine, it cannot be synthesized de novo.
Nucleotide
De novo pathways of nucleotides do not use free bases: adenine (abbreviated as A), guanine (G), cytosine (C), thymine (T), or uracil (U). The purine ring is built up one atom or a few atoms at a time and attached to ribose throughout the process. The pyrimidine ring is synthesized as orotate, attached to ribose phosphate, and later converted to the common pyrimidine nucleotides.
Cholesterol
Cholesterol is an essential structural component of animal cell membranes. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid and vitamin D. In mammals cholesterol is either absorbed from dietary sources or is synthesized de novo. Up to 70-80% of de novo cholesterol synthesis occurs in the liver, and about 10% of de novo cholesterol synthesis occurs in the small intestine. Cancer cells require cholesterol for cell membranes, so cancer cells contain many enzymes for de novo cholesterol synthesis from acetyl-CoA.
Fatty-acid (de novo lipogenesis)
De novo lipogenesis (DNL) is the process by which excess carbohydrates from the circulation are converted into fatty acids, which can be further converted into triglycerides or other lipids. Acetate and some amino acids (notably leucine and isoleucine) can also be carbon sources for DNL.
Normally, de novo lipogenesis occurs primarily in adipose tissue. But in conditions of obesity, insulin resistance, or type 2 diabetes de novo lipogenesis is reduced in adipose tissue (where carbohydrate-responsive element-binding protein (ChREBP) is the major transcription factor) and is increased in the liver (where sterol regulatory element-binding protein 1 (SREBP-1c) is the major transcription factor). ChREBP is normally activated in the liver by glucose (independent of insulin). Obesity and high-fat diets cause levels of carbohydrate-responsive element-binding protein in adipose tissue to be reduced. By contrast, high blood levels of insulin, due to a high carbohydrate meal or insulin resistance, strongly induces SREBP-1c expression in the liver. The reduction of adipose tissue de novo lipogenesis, and the increase in liver de novo lipogenesis due to obesity and insulin resistance leads to fatty liver disease.
Fructose consumption (in contrast to glucose) activates both SREBP-1c and ChREBP in an insulin independent manner. Although glucose can be converted into glycogen in the liver, fructose invariably increases de novo lipogenesis in the liver, elevating plasma triglycerides, more than glucose. Moreover, when equal amounts of glucose or fructose sweetened beverages are consumed, the fructose beverage not only causes a greater increase in plasma triglycerides, but causes a greater increase in abdominal fat.
DNL is elevated in non-alcoholic fatty liver disease (NAFLD), and is a hallmark of the disease. Compared with healthy controls, patients with NAFLD have an average 3.5-fold increase in DNL.
De novo fatty-acid synthesis is regulated by two important enzymes, namely acetyl-CoA carboxylase and fatty acid synthase. The enzyme acetyl-CoA carboxylase is responsible for introducing a carboxyl group to acetyl-CoA, yielding malonyl-CoA. The enzyme fatty acid synthase is then responsible for turning malonyl-CoA into a fatty-acid chain. De novo fatty-acid synthesis is largely inactive in human cells, since the diet is the major source of fatty acids; it is therefore considered a minor contributor to serum lipid homeostasis. In mice, de novo fatty-acid synthesis increases in white adipose tissue (WAT) with exposure to cold temperatures, which might be important for maintaining circulating triacylglycerol (TAG) levels in the bloodstream and for supplying fatty acids for thermogenesis during prolonged cold exposure.
DNA
De novo DNA synthesis refers to the synthetic creation of DNA rather than assembly or modification of natural precursor template DNA sequences. Initial oligonucleotide synthesis is followed by artificial gene synthesis, and finally by a process of cloning, error correction, and verification, which often involves cloning the genes into plasmids maintained in Escherichia coli or yeast.
Primase is an RNA polymerase, and it can add a primer to an existing strand awaiting replication. DNA polymerase cannot add primers, and therefore, needs primase to add the primer de novo.
References
Further reading
Harper's Illustrated Biochemistry, 26th ed. - Robert K. Murray, Darryl K. Granner, Peter A. Mayes, Victor W. Rodwell
Lehninger Principles of Biochemistry, 4th ed. - David L. Nelson, Michael M. Cox
Biochemistry, 5th ed. - Jeremy M. Berg, John L. Tymoczko, Lubert Stryer
Biochemistry, 2nd ed. - Reginald Garrett, Charles Grisham
Biochemistry for Dummies - John T. Moore, Richard Langley
Biochemistry, 6th ed. - Lubert Stryer, W. H. Freeman and Company, New York, 2007
External links
Purine and pyrimidine metabolism
De novo synthesis of purine nucleotides
Cell biology
Latin biological phrases
Physical property
A physical property is any property of a physical system that is measurable. The changes in the physical properties of a system can be used to describe its changes between momentary states. A quantifiable physical property is called a physical quantity. Measurable physical quantities are often referred to as observables.
Some physical properties are qualitative, such as shininess, brittleness, etc.; some general qualitative properties admit more specific related quantitative properties, such as in opacity, hardness, ductility, viscosity, etc.
Physical properties are often characterized as intensive and extensive properties. An intensive property does not depend on the size or extent of the system, nor on the amount of matter in the object, while an extensive property shows an additive relationship. These classifications are in general only valid in cases when smaller subdivisions of the sample do not interact in some physical or chemical process when combined.
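As a toy illustration of the distinction, the sketch below (with arbitrary sample values) combines two identical, non-interacting samples and checks which properties add and which stay the same.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    mass: float         # kg  (extensive)
    volume: float       # m^3 (extensive)
    temperature: float  # K   (intensive)

    @property
    def density(self) -> float:
        # kg/m^3: intensive, being the ratio of two extensive properties
        return self.mass / self.volume

def combine(a: Sample, b: Sample) -> Sample:
    """Combine two non-interacting samples at the same temperature:
    extensive properties add, the intensive temperature does not."""
    assert a.temperature == b.temperature
    return Sample(a.mass + b.mass, a.volume + b.volume, a.temperature)

s = Sample(mass=1.0, volume=0.001, temperature=298.15)
both = combine(s, s)
print(both.mass, both.volume)          # doubled (extensive)
print(both.temperature, both.density)  # unchanged (intensive)
```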
Properties may also be classified with respect to the directionality of their nature. For example, isotropic properties do not change with the direction of observation, and anisotropic properties do have spatial variance.
It may be difficult to determine whether a given property is a material property or not. Color, for example, can be seen and measured; however, what one perceives as color is really an interpretation of the reflective properties of a surface and the light used to illuminate it. In this sense, many ostensibly physical properties are called supervenient. A supervenient property is one which is actual, but is secondary to some underlying reality. This is similar to the way in which objects are supervenient on atomic structure. A cup might have the physical properties of mass, shape, color, temperature, etc., but these properties are supervenient on the underlying atomic structure, which may in turn be supervenient on an underlying quantum structure.
Physical properties are contrasted with chemical properties which determine the way a material behaves in a chemical reaction.
List of properties
The physical properties of an object that are traditionally defined by classical mechanics are often called mechanical properties. Other broad categories, commonly cited, are electrical properties, optical properties, thermal properties, etc. Examples of physical properties include:
absorption (physical)
absorption (electromagnetic)
albedo
angular momentum
area
brittleness
boiling point
capacitance
color
concentration
density
dielectric
ductility
distribution
efficacy
elasticity
electric charge
electrical conductivity
electrical impedance
electric field
electric potential
emission
flow rate (mass)
flow rate (volume)
fluidity
frequency
hardness
heat capacity
inductance
intrinsic impedance
intensity
irradiance
length
location
luminance
luminescence
luster
malleability
magnetic field
magnetic flux
mass
melting point
moment
momentum
opacity
permeability
permittivity
plasticity
pressure
radiance
resistivity
reflectivity
refractive index
spin
solubility
specific heat
strength
stiffness
temperature
tension
thermal conductivity (and resistance)
velocity
viscosity
volume
wave impedance
See also
List of materials properties
Physical quantity
Physical test
Test method
References
Bibliography
External links
Physical and Chemical Property Data Sources – a list of references which cover several chemical and physical properties of various materials
Polymerization
In polymer chemistry, polymerization (American English), or polymerisation (British English), is a process of reacting monomer molecules together in a chemical reaction to form polymer chains or three-dimensional networks. There are many forms of polymerization and different systems exist to categorize them.
In chemical compounds, polymerization can occur via a variety of reaction mechanisms that vary in complexity due to the functional groups present in the reactants and their inherent steric effects. In more straightforward polymerizations, alkenes form polymers through relatively simple radical reactions; in contrast, reactions involving substitution at a carbonyl group require more complex synthesis due to the way in which reactants polymerize.
As alkenes can polymerize in somewhat straightforward radical reactions, they form useful compounds such as polyethylene and polyvinyl chloride (PVC), which are produced in high tonnages each year due to their usefulness in manufacturing processes of commercial products, such as piping, insulation and packaging. In general, polymers such as PVC are referred to as "homopolymers", as they consist of repeated long chains or structures of the same monomer unit, whereas polymers that consist of more than one monomer unit are referred to as copolymers (or co-polymers).
Other monomer units, such as formaldehyde hydrates or simple aldehydes, are able to polymerize themselves at quite low temperatures (ca. −80 °C) to form trimers: molecules consisting of three monomer units, which can cyclize to form cyclic ring structures, or undergo further reactions to form tetramers, or four monomer-unit compounds. Such small polymers are referred to as oligomers. Generally, because formaldehyde is an exceptionally reactive electrophile, it allows nucleophilic addition of hemiacetal intermediates, which are in general short-lived and relatively unstable "mid-stage" compounds that react with other non-polar molecules present to form more stable polymeric compounds.
Polymerization that is not sufficiently moderated and proceeds at a fast rate can be very hazardous. This phenomenon is known as autoacceleration, and can cause fires and explosions.
Step-growth vs. chain-growth polymerization
Step-growth and chain-growth are the main classes of polymerization reaction mechanisms. The former is often easier to implement but requires precise control of stoichiometry. The latter more reliably affords high molecular-weight polymers, but only applies to certain monomers.
Step-growth
In step-growth (or step) polymerization, pairs of reactants, of any lengths, combine at each step to form a longer polymer molecule. The average molar mass increases slowly. Long chains form only late in the reaction.
Step-growth polymers are formed by independent reaction steps between functional groups of monomer units, usually containing heteroatoms such as nitrogen or oxygen. Most step-growth polymers are also classified as condensation polymers, since a small molecule such as water is lost when the polymer chain is lengthened. For example, polyester chains grow by reaction of alcohol and carboxylic acid groups to form ester links with loss of water. However, there are exceptions; for example, polyurethanes are step-growth polymers formed from isocyanate and alcohol bifunctional monomers without loss of water or other volatile molecules, and are classified as addition polymers rather than condensation polymers.
Step-growth polymers increase in molecular weight at a very slow rate at lower conversions and reach moderately high molecular weights only at very high conversion (i.e., >95%). Solid state polymerization to afford polyamides (e.g., nylons) is an example of step-growth polymerization.
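The strong dependence of chain length on conversion can be made concrete with the Carothers equation, a standard step-growth result (not stated explicitly above) that gives the number-average degree of polymerization as X_n = 1/(1 − p) for an ideal, stoichiometric polymerization at functional-group conversion p. A minimal Python sketch:

def degree_of_polymerization(p):
    # Carothers equation for an ideal, stoichiometric step-growth polymerization:
    # number-average degree of polymerization X_n = 1 / (1 - p),
    # where p is the fractional conversion of functional groups.
    if not 0.0 <= p < 1.0:
        raise ValueError("conversion p must satisfy 0 <= p < 1")
    return 1.0 / (1.0 - p)

# High molecular weight appears only near complete conversion:
for p in (0.50, 0.90, 0.95, 0.99, 0.999):
    print(f"p = {p:5.3f}  ->  X_n = {degree_of_polymerization(p):7.1f}")

At 95% conversion the average chain is only about 20 units long, while reaching 1000 units requires 99.9% conversion, which is consistent with the need for very high conversions noted above.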
Chain-growth
In chain-growth (or chain) polymerization, the only chain-extension reaction step is the addition of a monomer to a growing chain with an active center such as a free radical, cation, or anion. Once the growth of a chain is initiated by formation of an active center, chain propagation is usually rapid by addition of a sequence of monomers. Long chains are formed from the beginning of the reaction.
Chain-growth polymerization (or addition polymerization) involves the linking together of unsaturated monomers, especially containing carbon-carbon double bonds. The pi-bond is lost by formation of a new sigma bond. Chain-growth polymerization is involved in the manufacture of polymers such as polyethylene, polypropylene, polyvinyl chloride (PVC), and acrylate. In these cases, the alkenes RCH=CH2 are converted to high molecular weight alkanes (-RCHCH2-)n (R = H, CH3, Cl, CO2CH3).
Other forms of chain growth polymerization include cationic addition polymerization and anionic addition polymerization. A special case of chain-growth polymerization leads to living polymerization. Ziegler–Natta polymerization allows considerable control of polymer branching.
Diverse methods are employed to manipulate the initiation, propagation, and termination rates during chain polymerization. A related issue is temperature control, also called heat management, during these reactions, which are often highly exothermic. For example, for the polymerization of ethylene, 93.6 kJ of energy are released per mole of monomer.
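To see why heat management matters, here is a rough, illustrative Python estimate of the heat released and the worst-case (adiabatic) temperature rise for a batch of ethylene, using the 93.6 kJ/mol figure quoted above; the batch size and the specific heat of the reacting mixture are assumed round numbers for illustration, not values from the text.

DELTA_H_KJ_PER_MOL = 93.6        # heat released per mole of ethylene (figure quoted above)
M_ETHYLENE_G_PER_MOL = 28.05     # molar mass of ethylene, g/mol

def heat_released_kj(mass_kg):
    # Total heat released (kJ) when mass_kg of ethylene polymerizes completely.
    moles = mass_kg * 1000.0 / M_ETHYLENE_G_PER_MOL
    return moles * DELTA_H_KJ_PER_MOL

def adiabatic_temperature_rise(mass_kg, cp_kj_per_kg_k=2.0):
    # Upper-bound temperature rise (K) if none of the heat is removed;
    # cp is an assumed, round specific heat for the reacting mixture.
    return heat_released_kj(mass_kg) / (mass_kg * cp_kj_per_kg_k)

m = 100.0  # kg of ethylene, an arbitrary batch size
print(f"heat released: {heat_released_kj(m):,.0f} kJ")
print(f"adiabatic temperature rise: {adiabatic_temperature_rise(m):,.0f} K")

The estimate comes out to hundreds of thousands of kilojoules and a temperature rise of well over a thousand kelvin if no heat were removed, which is why emulsion, solution, and suspension methods that dilute the reaction and carry away heat are so widely used.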
The manner in which polymerization is conducted is a highly evolved technology. Methods include emulsion polymerization, solution polymerization, suspension polymerization, and precipitation polymerization. Although the polymer dispersity and molecular weight may be improved, these methods may introduce additional processing requirements to isolate the product from a solvent.
Photopolymerization
Most photopolymerization reactions are chain-growth polymerizations which are initiated by the absorption of visible or ultraviolet light. Photopolymerization can also be a step-growth polymerization. The light may be absorbed either directly by the reactant monomer (direct photopolymerization), or else by a photosensitizer which absorbs the light and then transfers energy to the monomer. In general, only the initiation step differs from that of the ordinary thermal polymerization of the same monomer; subsequent propagation, termination, and chain-transfer steps are unchanged.
In step-growth photopolymerization, absorption of light triggers an addition (or condensation) reaction between two comonomers that do not react without light. A propagation cycle is not initiated because each growth step requires the assistance of light.
Photopolymerization can be used as a photographic or printing process because polymerization only occurs in regions which have been exposed to light. Unreacted monomer can be removed from unexposed regions, leaving a relief polymeric image. Several forms of 3D printing—including layer-by-layer stereolithography and two-photon absorption 3D photopolymerization—use photopolymerization.
Multiphoton polymerization using single pulses has also been demonstrated for the fabrication of complex structures using a digital micromirror device.
See also
Cross-link
Enzymatic polymerization
In situ polymerization
Metallocene
Plasma polymerization
Polymer characterization
Polymer physics
Reversible addition−fragmentation chain-transfer polymerization
Ring-opening polymerization
Sequence-controlled polymers
Sol-gel
References
Demethylation
Demethylation is the chemical process resulting in the removal of a methyl group (CH3) from a molecule. A common way of demethylation is the replacement of a methyl group by a hydrogen atom, resulting in a net loss of one carbon and two hydrogen atoms.
The counterpart of demethylation is methylation.
In biochemistry
Demethylation is relevant to epigenetics. Demethylation of DNA is catalyzed by demethylases. These enzymes oxidize N-methyl groups, which occur in histones, in lysine derivatives, and in some forms of DNA.
R2N-CH3 + O → R2N-H + CH2O
One family of such oxidative enzymes is the cytochrome P450. Alpha-ketoglutarate-dependent hydroxylases are also active for demethylation of DNA, operating by a similar stoichiometry. These reactions, which proceed via hydroxylation, exploit the slightly weakened C-H bonds of methylamines and methyl ethers.
Demethylations of some sterols are steps in the biosynthesis of testosterone and cholesterol; the methyl groups are lost as formate.
Biomass processing
Methoxy groups heavily decorate the biopolymer lignin. Much interest has been shown in converting this abundant form of biomass into useful chemicals (aside from fuel). One step in such processing is demethylation.
The demethylation of vanillin, a derivative of lignin, requires strong base. The pulp and paper industry digests lignin using aqueous sodium sulfide, which partially depolymerizes the lignin. Delignification is accompanied by extensive O-demethylation, yielding methanethiol, which is emitted by paper mills.
In organic chemistry
Demethylation often refers to cleavage of ethers, especially aryl ethers.
Historically, aryl methyl ethers, including natural products such as codeine (O-methylmorphine), have been demethylated by heating the substance in molten pyridine hydrochloride (melting point ) at , sometimes with excess hydrogen chloride, in a process known as the Zeisel–Prey ether cleavage. Quantitative analysis for aromatic methyl ethers can be performed by argentometric determination of the N-methylpyridinium chloride formed. The mechanism of this reaction starts with proton transfer from pyridinium ion to the aryl methyl ether, a highly unfavorable step (K < 10⁻¹¹) that accounts for the harsh conditions required, given the much weaker acidity of pyridinium (pKa = 5.2) compared to the protonated aryl methyl ether (an arylmethyloxonium ion, pKa = –6.7 for aryl = Ph). This is followed by SN2 attack of the arylmethyloxonium ion at the methyl group by either pyridine or chloride ion (depending on the substrate) to give the free phenol and, ultimately, N-methylpyridinium chloride, either directly or by subsequent methyl transfer from methyl chloride to pyridine.
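The quoted equilibrium constant for the initial proton transfer follows directly from the two pKa values given above; a quick Python check, using the standard relation log10 K = pKa(acid formed) − pKa(acid consumed):

# Proton transfer from pyridinium (pKa 5.2) to an aryl methyl ether whose
# conjugate acid, the arylmethyloxonium ion, has pKa -6.7 (values from the text).
# For HA + B <=> A- + BH+ :  log10 K = pKa(BH+) - pKa(HA).
pKa_pyridinium = 5.2   # acid consumed (proton donor)
pKa_oxonium = -6.7     # acid formed (protonated aryl methyl ether)

log_K = pKa_oxonium - pKa_pyridinium
K = 10 ** log_K
print(f"log10 K = {log_K:.1f}  ->  K = {K:.1e}")   # about 1e-12, i.e. K < 10^-11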
Another classical (but, again, harsh) method for the removal of the methyl group of an aryl methyl ether is to heat the ether in a solution of hydrogen bromide or hydrogen iodide, sometimes also with acetic acid. The cleavage of ethers by hydrobromic or hydroiodic acid proceeds by protonation of the ether, followed by displacement by bromide or iodide. A slightly milder set of conditions uses cyclohexyl iodide (CyI, 10.0 equiv) in N,N-dimethylformamide to generate a small amount of hydrogen iodide in situ.
Boron tribromide, which can be used at room temperature or below, is a more specialized reagent for the demethylation of aryl methyl ethers. The mechanism of ether dealkylation proceeds via the initial reversible formation of a Lewis acid-base adduct between the strongly Lewis acidic BBr3 and the Lewis basic ether. This Lewis adduct can reversibly dissociate to give a dibromoboryl oxonium cation and Br–. Rupture of the ether linkage occurs through the subsequent nucleophilic attack on the oxonium species by Br– to yield an aryloxydibromoborane and methyl bromide. Upon completion of the reaction, the phenol is liberated along with boric acid (H3BO3) and hydrobromic acid (aq. HBr) upon hydrolysis of the dibromoborane derivative during aqueous workup.
Stronger nucleophiles such as diorganophosphides (LiPPh2) also cleave aryl ethers, sometimes under mild conditions. Other strong nucleophiles that have been employed include thiolate salts like EtSNa.
Aromatic methyl ethers, particularly those with an adjacent carbonyl group, can be regioselectively demethylated using magnesium iodide etherate. One example of its use is in the synthesis of the natural product Calphostin A.
Methyl esters are also susceptible to demethylation, which is usually achieved by saponification. Highly specialized demethylations are abundant, such as the Krapcho decarboxylation.
A mixture of anethole, KOH, and alcohol was heated in an autoclave. Although the product of this reaction was the expected anol, a highly reactive dimerization product called dianol was also discovered in the mother liquors by Charles Dodds.
N-demethylation
N-demethylation of tertiary amines is achieved by the von Braun reaction, which uses BrCN as the reagent to give the corresponding nor- derivatives. A modern variation of the von Braun reaction was developed in which BrCN was superseded by ethyl chloroformate. The preparation of Paxil from arecoline is an application of this reaction, as is the synthesis of GSK-372,475.
The N-demethylation of imipramine gives desipramine.
See also
Methylation, the addition of a methyl group to a substrate
References
Biomimetics
Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has gone through evolution over the 3.8 billion years since life is estimated to have appeared on the Earth. It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and with the environment, and these interactions give rise to many of the properties of materials. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
A more recent example of biomimicry is "managemANT", described by Johannes-Paul Fladerer and Ernst Kurzmann. This term (a combination of the words "management" and "ant") describes the use of the behavioural strategies of ants in economic and management strategies. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, whose findings demonstrated the potential economic and environmental benefits of biomimicry.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in conventional form determined the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter which gives a minimum-mass engineering system.
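For reference, the conventional (cubic) form of Murray's law states that a parent vessel of diameter d0 branching into daughters d1…dn satisfies d0³ = d1³ + … + dn³. A small Python sketch of the symmetric case, with illustrative numbers only:

def murray_daughter_diameter(d_parent, n_daughters):
    # Symmetric split obeying Murray's law: d_parent**3 == n * d_daughter**3.
    return d_parent / n_daughters ** (1.0 / 3.0)

d0 = 10.0  # mm, arbitrary parent diameter
for n in (2, 3, 4):
    d = murray_daughter_diameter(d0, n)
    print(f"{n} daughters: d = {d:.2f} mm  (check: {n} * d^3 = {n * d**3:.1f}, parent d^3 = {d0**3:.1f})")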
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The aerodynamic, streamlined design of the improved Japanese high-speed train, the 500 Series Shinkansen, was modelled after the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy, mimic cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces, and Pleobot, a shrimp-inspired robot to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs are much higher than those of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of a building's life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form and instead seeking to use nature to solve problems of the building's functioning and to save energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers studied the termite's ability to maintain virtually constant temperature and humidity in their termite mounds in Africa despite outside temperatures that vary from . Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation in termite mounds and designed a double façade that significantly cuts down on over-lit areas in a building. Scientists have imitated the porous nature of mound walls by designing a façade with double panels that was able to reduce heat gained by radiation and increase heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This design of façade is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade: the green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants, and the damp plant substrate further supports the cooling effect.
Scientists at Shanghai University were able to replicate the complex microstructure of the clay-made conduit network in the mound to mimic its excellent humidity control. They proposed a porous humidity control material (HCM) using sepiolite and calcium chloride, with a water vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which acts as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold. Flectofold has been inspired from the trapping system developed by the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are lightweight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, energy storage and conversion. In a classic design problem, strength and toughness are more likely to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate across different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure: a brick-and-mortar-like arrangement with thick mineral layers (0.2–0.9 μm) of closely packed aragonite and a thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures have already been produced, the successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials. Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and have been shown to enhance the fracture toughness of leaves, key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example of this is the event camera, in which only the pixels that receive a new signal update to a new state; all other pixels do not update until a signal is received.
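A toy sketch of this event-driven update rule in Python with NumPy (the threshold, array size, and intensity values are arbitrary illustrative choices, not a real sensor model): only pixels whose log-intensity change exceeds a threshold emit an event and refresh their stored state, while everything else is left untouched.

import numpy as np

def events_from_frames(prev, curr, threshold=0.15):
    # Emit an event only for pixels whose log-intensity changed by more than
    # `threshold`; all other pixels keep their previously stored state.
    # Returns a list of (row, col, polarity) events and the updated reference frame.
    eps = 1e-6
    delta = np.log(curr + eps) - np.log(prev + eps)
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)
    updated = prev.copy()
    updated[rows, cols] = curr[rows, cols]          # only the firing pixels update
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist())), updated

# Example: a mostly static scene with one brightening pixel.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9
events, state = events_from_frames(prev, curr)
print(events)   # -> [(1, 2, 1)]: one positive event; the rest of the array is untouched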
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These kinds of organisms have toe pads which are permanently wetted by mucus secreted from glands that open into the channels between epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California Santa Barbara borrowed and simplified chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion to create copolyampholytes, and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesive glue from mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity, a remarkable phenomenon, emerges when a solid surface possesses minute roughness, forming interfaces with droplets through wetting while altering contact angles. This behavior hinges on the roughness factor (Rf), defining the ratio of solid-liquid area to its projection, influencing contact angles. On rough surfaces, non-wetting liquids give rise to composite solid-liquid-air interfaces, their contact angles determined by the distribution of wet and air-pocket areas. The achievement of superliquiphobicity involves increasing the fractional flat geometrical area (fLA) and Rf, leading to surfaces that actively repel liquids.
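The two standard models behind these ideas are the Wenzel relation for a fully wetted rough surface, cos θ* = Rf·cos θ, and the Cassie–Baxter relation for a composite solid-air interface, cos θ* = f·(cos θ + 1) − 1, where f is the wetted solid fraction; neither equation is written out in the text above, so the Python sketch below, with arbitrary illustrative numbers, is only a rough companion to it.

import math

def wenzel_angle(theta_deg, roughness_factor):
    # Wenzel model (fully wetted rough surface): cos(theta*) = Rf * cos(theta).
    c = max(-1.0, min(1.0, roughness_factor * math.cos(math.radians(theta_deg))))
    return math.degrees(math.acos(c))

def cassie_baxter_angle(theta_deg, solid_fraction):
    # Cassie-Baxter model (droplet partly resting on trapped air):
    # cos(theta*) = f * (cos(theta) + 1) - 1, with f the wetted solid fraction.
    c = max(-1.0, min(1.0, solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0))
    return math.degrees(math.acos(c))

# A surface with an intrinsic contact angle of 110 degrees becomes far more
# repellent once most of the droplet rests on trapped air (small solid fraction).
print(f"Wenzel, Rf = 1.8:        {wenzel_angle(110, 1.8):.0f} deg")
print(f"Cassie-Baxter, f = 0.10: {cassie_baxter_angle(110, 0.10):.0f} deg")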
The inspiration for crafting such surfaces draws from nature's ingenuity, prominently illustrated by the renowned "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low surface tension liquids and achieve near-zero contact angles.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low surface energy materials, such as fluorinated substances. These geometries include overhangs that widen beneath the surface, enabling repellency even for minimal contact angles. Researchers have successfully fabricated various re-entrant geometries, offering a pathway for practical applications in diverse fields. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, and more, presenting innovative solutions to challenges in biomedicine, desalination, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: (re)action-coupling, self-adaptability, self-repair, and energy autonomy. As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in various organs and tissues of the plant. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, research has focused on the motion principles of the plant in order to develop AVFT (artificial Venus flytrap robots). Through its movement during prey capture, the plant has inspired soft robotic motion systems. The fast snap-buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
Another example of mimicking plants is Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and have more vibrant colour than those obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colors in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light which is reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicates obtained from the self-assembly of cellulose nanocrystals into helicoids, which only reflect left-handed circularly polarised light.
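As a rough companion to the Bragg-reflector comparison above, the first-order normal-incidence reflectance peak of a periodic two-layer stack sits at λ = 2·(n1·d1 + n2·d2); the refractive indices and layer thicknesses in the Python sketch below are illustrative, cellulose-like guesses rather than measured values for these fruits.

def bragg_peak_wavelength(n1, d1, n2, d2, order=1):
    # m * wavelength = 2 * (n1*d1 + n2*d2) for a periodic bilayer stack at
    # normal incidence; layer thicknesses in nanometres.
    return 2.0 * (n1 * d1 + n2 * d2) / order

# Cellulose-like indices and ~75-80 nm layers put the peak in the blue,
# consistent with the blue-green iridescence described above.
print(f"peak = {bragg_peak_wavelength(1.55, 75, 1.35, 80):.0f} nm")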
The fruit of Elaeocarpus angustifolius also shows structural colour, which arises from the presence of specialised cells called iridosomes that have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intracellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside the epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachis adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature; the organelle has no basic shape or size, and its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what is necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano-device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments of pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substance with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery, since particles release their contents upon exposure to specific pH levels.
See also
Artificial photosynthesis
Artificial enzyme
Bio-inspired computing
Bioinspiration & Biomimetics
Biomimetic synthesis
Carbon sequestration
Reverse engineering
Synthetic biology
References
Further reading
Benyus, J. M. (2001). Along Came a Spider. Sierra, 86(4), 46–47.
Hargroves, K. D. & Smith, M. H. (2006). Innovation inspired by nature: Biomimicry. Ecos, (129), 27–28.
Marshall, A. (2009). Wild Design: The Ecomimicry Project, North Atlantic Books: Berkeley.
Passino, Kevin M. (2004). Biomimicry for Optimization, Control, and Automation. Springer.
Pyper, W. (2006). Emulating nature: The rise of industrial ecology. Ecos, (129), 22–26.
Smith, J. (2007). It's only natural. The Ecologist, 37(8), 52–55.
Thompson, D'Arcy W., On Growth and Form. Dover 1992 reprint of 1942 2nd ed. (1st ed., 1917).
Vogel, S. (2000). Cats' Paws and Catapults: Mechanical Worlds of Nature and People. Norton.
External links
Biomimetics MIT
Sex, Velcro and Biomimicry with Janine Benyus
Janine Benyus: Biomimicry in Action from TED 2009
Design by Nature - National Geographic
Michael Pawlyn: Using nature's genius in architecture from TED 2010
Robert Full shows how human engineers can learn from animals' tricks from TED 2002
The Fast Draw: Biomimicry from CBS News
Forensic science
Forensic science, also known as criminalistics, is the application of scientific principles and methods to support legal decision-making in matters of criminal and civil law.
During criminal investigation in particular, it is governed by the legal standards of admissible evidence and criminal procedure. It is a broad field utilizing numerous practices such as the analysis of DNA, fingerprints, bloodstain patterns, firearms, ballistics, toxicology, microscopy and fire debris analysis.
Forensic scientists collect, preserve, and analyze evidence during the course of an investigation. While some forensic scientists travel to the scene of the crime to collect the evidence themselves, others occupy a laboratory role, performing analysis on objects brought to them by other individuals. Others are involved in analysis of financial, banking, or other numerical data for use in financial crime investigation, and can be employed as consultants from private firms, academia, or as government employees.
In addition to their laboratory role, forensic scientists testify as expert witnesses in both criminal and civil cases and can work for either the prosecution or the defense. While any field could technically be forensic, certain sections have developed over time to encompass the majority of forensically related cases.
Etymology
The term forensic stems from the Latin word forēnsis (3rd declension, adjective), meaning "of a forum, place of assembly". The history of the term originates in Roman times, when a criminal charge meant presenting the case before a group of public individuals in the forum. Both the person accused of the crime and the accuser would give speeches based on their sides of the story. The case would be decided in favor of the individual with the best argument and delivery. This origin is the source of the two modern usages of the word forensic: as a form of legal evidence and as a category of public presentation.
In modern use, the term forensics is often used in place of "forensic science."
The word "science", is derived from the Latin word for 'knowledge' and is today closely tied to the scientific method, a systematic way of acquiring knowledge. Taken together, forensic science means the use of scientific methods and processes for crime solving.
History
Origins of forensic science and early methods
The ancient world lacked standardized forensic practices, which enabled criminals to escape punishment. Criminal investigations and trials relied heavily on forced confessions and witness testimony. However, ancient sources do contain several accounts of techniques that foreshadow concepts in forensic science developed centuries later.
The first written account of using medicine and entomology to solve criminal cases is attributed to the book Xi Yuan Lu (translated as Washing Away of Wrongs), written in China in 1248 by Song Ci (1186–1249), a director of justice, jail and supervision, during the Song dynasty.
Song Ci introduced regulations concerning autopsy reports to court, explained how to protect the evidence in the examining process, and explained why forensic workers must demonstrate impartiality to the public. He devised methods for making antiseptic and for promoting the reappearance of hidden injuries to dead bodies and bones (using sunlight and vinegar under a red-oil umbrella); for calculating the time of death (allowing for weather and insect activity); and for washing and examining the dead body to ascertain the reason for death. The book also described methods for distinguishing between suicide and faked suicide, and it stated that all wounds and dead bodies should be examined, not avoided. It became the first known written work to help determine the cause of death.
In one of Song Ci's accounts (Washing Away of Wrongs), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. (He realized it was a sickle by testing various blades on an animal carcass and comparing the wounds.) Flies, attracted by the smell of blood, eventually gathered on a single sickle. In light of this, the owner of that sickle confessed to the murder. The book also described how to distinguish between a drowning (water in the lungs) and strangulation (broken neck cartilage), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident.
Methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. In ancient India, some suspects were made to fill their mouths with dried rice and spit it back out. Similarly, in ancient China, those accused of a crime would have rice powder placed in their mouths. In ancient Middle Eastern cultures, the accused were made to lick hot metal rods briefly. It is thought that these tests had some validity, since a guilty person would produce less saliva and thus have a drier mouth; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva.
Education and training
At first glance, forensic intelligence may appear to be a nascent facet of forensic science facilitated by advances in information technologies such as computers, databases, and data-flow management software. A more profound examination, however, reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to participate actively in investigative and policing strategies. In doing so, it draws on existing practices described in the scientific literature, advocating a shift away from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system and toward a view of forensic science as a discipline studying the informative potential of traces—the remnants of criminal activity. Embracing this shift poses a significant challenge for education, since it requires a change in learners' mindset to accept the concepts and methodologies of forensic intelligence.
Recent calls for the integration of forensic scientists into the criminal justice system, as well as into policing and intelligence missions, underscore the need to establish educational and training initiatives in forensic intelligence. It has been argued that a discernible gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, and that this asymmetry can be rectified only through educational interventions.
The primary challenge in forensic intelligence education and training is the formulation of programs aimed at heightening awareness, particularly among managers, so as to mitigate the risk of suboptimal decisions in information processing. Two recent European courses have been highlighted as exemplars of such educational endeavors, with lessons learned and proposed future directions.
The overarching conclusion is that the heightened focus on forensic intelligence has the potential to rejuvenate a proactive approach to forensic science, enhance quantifiable efficiency, and foster greater involvement in investigative and managerial decision-making. A novel educational challenge is articulated for forensic science university programs worldwide: a shift in emphasis from a fragmented criminal trace analysis to a more comprehensive security problem-solving approach.
Development of forensic science
In 16th-century Europe, medical practitioners in army and university settings began to gather information on the cause and manner of death. Ambroise Paré, a French army surgeon, systematically studied the effects of violent death on internal organs. Two Italian surgeons, Fortunato Fidelis and Paolo Zacchia, laid the foundation of modern pathology by studying changes that occurred in the structure of the body as the result of disease. In the late 18th century, writings on these topics began to appear, including A Treatise on Forensic Medicine and Public Health by the French physician François-Emmanuel Fodéré and The Complete System of Police Medicine by the German medical expert Johann Peter Frank.
As the rational values of the Enlightenment era increasingly permeated society in the 18th century, criminal investigation became a more evidence-based, rational procedure − the use of torture to force confessions was curtailed, and belief in witchcraft and other powers of the occult largely ceased to influence the court's decisions. Two examples of English forensic science in individual legal proceedings demonstrate the increasing use of logic and procedure in criminal investigations at the time. In 1784, in Lancaster, John Toms was tried and convicted for murdering Edward Culshaw with a pistol. When the dead body of Culshaw was examined, a pistol wad (crushed paper used to secure powder and balls in the muzzle) found in his head wound matched perfectly with a torn newspaper found in Toms's pocket, leading to the conviction.
In Warwick in 1816, a farm labourer was tried and convicted of the murder of a young maidservant. She had been drowned in a shallow pool and bore the marks of violent assault. The police found footprints and an impression from corduroy cloth with a sewn patch in the damp earth near the pool, along with scattered grains of wheat and chaff. The breeches of a farm labourer who had been threshing wheat nearby were examined and corresponded exactly to the impression in the earth near the pool.
An article appearing in Scientific American in 1885 describes the use of microscopy to distinguish between the blood of two persons in a criminal case in Chicago.
Chromatography
Chromatography is a common technique in forensic science. It is a method of separating the components of a mixture as they are carried by a mobile phase through a stationary phase. Chromatography is an essential forensic tool, helping analysts identify and compare trace amounts of samples including ignitable liquids, drugs, and biological material. Many laboratories use gas chromatography/mass spectrometry (GC/MS) to examine these kinds of samples; this analysis provides rapid and reliable data for identifying the samples in question.
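As a rough illustration of the comparison step only, the following Python sketch matches a hypothetical unknown spectrum against a tiny set of made-up reference spectra using cosine similarity. The compound names, m/z values and intensities are invented assumptions, not real library data, and actual forensic GC/MS identification is performed with validated instrument software and curated spectral libraries.

```python
# Toy illustration only: real forensic GC/MS identification relies on validated
# instrument software and curated spectral libraries, not ad hoc scripts.
import math

def cosine_similarity(spec_a, spec_b):
    """Compare two spectra given as {m/z: relative intensity} dictionaries."""
    mz_values = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mz_values)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical, simplified reference spectra (m/z: intensity).
reference_library = {
    "compound A": {91: 100, 92: 60, 65: 12},
    "compound B": {91: 100, 106: 55, 105: 25},
}
unknown = {91: 100, 92: 58, 65: 10}

scores = {name: cosine_similarity(unknown, spec) for name, spec in reference_library.items()}
best_match = max(scores, key=scores.get)
print(best_match, round(scores[best_match], 3))
```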
Toxicology
A method for detecting arsenious oxide (simple arsenic) in corpses was devised in 1773 by the Swedish chemist Carl Wilhelm Scheele. His work was expanded upon in 1806 by the German chemist Valentin Ross, who learned to detect the poison in the walls of a victim's stomach. Toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. Forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse; their work is critical in determining whether harmful substances played a role in a person's death or impairment.
James Marsh was the first to apply this new science to the art of forensics. He was called by the prosecution in a murder trial to give evidence as a chemist in 1832. The defendant, John Bodle, was accused of poisoning his grandfather with arsenic-laced coffee. Marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. While he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt.
Annoyed by that, Marsh developed a much better test. He combined a sample containing arsenic with sulfuric acid and arsenic-free zinc, resulting in arsine gas. The gas was ignited, and it decomposed to pure metallic arsenic, which, when passed to a cold surface, would appear as a silvery-black deposit. So sensitive was the test, known formally as the Marsh test, that it could detect as little as one-fiftieth of a milligram of arsenic. He first described this test in The Edinburgh Philosophical Journal in 1836.
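In simplified outline, the chemistry usually credited to the Marsh test is the generation of arsine gas from the arsenic-bearing sample, zinc and acid, followed by thermal decomposition of the arsine to the metallic arsenic "mirror":

As2O3 + 6 Zn + 6 H2SO4 → 2 AsH3 + 6 ZnSO4 + 3 H2O
2 AsH3 → 2 As + 3 H2 (on heating)

Real casework samples are more complex, and this sketch omits the other arsenic compounds and acids the test could handle.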
Ballistics and firearms
Ballistics is "the science of the motion of projectiles in flight". In forensic science, analysts examine the patterns left on bullets and cartridge casings discharged from a weapon. When a gun is fired, the bullet picks up markings unique to the barrel, and the cartridge casing bears impressions from the firing pin and breech. This examination can help scientists identify possible makes and models of weapons connected to a crime.
Henry Goddard at Scotland Yard pioneered the use of bullet comparison in 1835. He noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process.
Anthropometry
The French police officer Alphonse Bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. Before that time, criminals could be identified only by name or photograph. Dissatisfied with the ad hoc methods used to identify captured criminals in France in the 1870s, he began his work on developing a reliable system of anthropometrics for human classification.
Bertillon created many other forensics techniques, including forensic document examination, the use of galvanoplastic compounds to preserve footprints, ballistics, and the dynamometer, used to determine the degree of force used in breaking and entering. Although his central methods were soon to be supplanted by fingerprinting, "his other contributions like the mug shot and the systematization of crime-scene photography remain in place to this day."
Fingerprints
Sir William Herschel was one of the first to advocate the use of fingerprinting in the identification of criminal suspects. In 1858, while working for the Indian Civil Service, he began to use thumbprints on documents as a security measure to prevent the then-rampant repudiation of signatures.
In 1877 at Hooghly (near Kolkata), Herschel instituted the use of fingerprints on contracts and deeds, and he registered government pensioners' fingerprints to prevent the collection of money by relatives after a pensioner's death.
In 1880, Henry Faulds, a Scottish surgeon in a Tokyo hospital, published his first paper on the subject in the scientific journal Nature, discussing the usefulness of fingerprints for identification and proposing a method to record them with printing ink. He established their first classification and was also the first to identify fingerprints left on a vial. Returning to the UK in 1886, he offered the concept to the Metropolitan Police in London, but it was dismissed at that time.
Faulds wrote to Charles Darwin with a description of his method but, too old and ill to work on it, Darwin gave the information to his cousin, Francis Galton, who was interested in anthropology. Thus inspired, Galton studied fingerprints for ten years and published a detailed statistical model of fingerprint analysis and identification, encouraging its use in forensic science in his book Finger Prints. He calculated that the chance of a "false positive" (two different individuals having the same fingerprints) was about 1 in 64 billion.
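A commonly cited reconstruction of Galton's arithmetic, given here only as an illustration of the order of magnitude rather than a quotation of his book, assigns a probability of 1/2 to correctly guessing the ridge detail in each of 24 regions of a print, together with factors of 1/16 for general pattern type and 1/256 for ridge counts: (1/2)^24 × (1/16) × (1/256) = 2^-36, or roughly 1 in 6.9 × 10^10, which is of the same order as the "1 in 64 billion" figure quoted above.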
Juan Vucetich, an Argentine chief police officer, created the first method of recording the fingerprints of individuals on file. In 1892, after studying Galton's pattern types, Vucetich set up the world's first fingerprint bureau. In that same year, Francisca Rojas of Necochea was found in a house with neck injuries whilst her two sons were found dead with their throats cut. Rojas accused a neighbour, but despite brutal interrogation, this neighbour would not confess to the crimes. Inspector Alvarez, a colleague of Vucetich, went to the scene and found a bloody thumb mark on a door. When it was compared with Rojas' prints, it was found to be identical with her right thumb. She then confessed to the murder of her sons.
A Fingerprint Bureau was established in Calcutta (Kolkata), India, in 1897, after the Council of the Governor General approved a committee report that fingerprints should be used for the classification of criminal records. Working in the Calcutta Anthropometric Bureau, before it became the Fingerprint Bureau, were Azizul Haque and Hem Chandra Bose. Haque and Bose were Indian fingerprint experts who have been credited with the primary development of a fingerprint classification system eventually named after their supervisor, Sir Edward Richard Henry. The Henry Classification System, co-devised by Haque and Bose, was accepted in England and Wales when the first United Kingdom Fingerprint Bureau was founded in Scotland Yard, the Metropolitan Police headquarters, London, in 1901. Sir Edward Richard Henry subsequently achieved improvements in dactyloscopy.
In the United States, Henry P. DeForrest used fingerprinting in the New York Civil Service in 1902, and by December 1905, New York City Police Department Deputy Commissioner Joseph A. Faurot, an expert in the Bertillon system and a fingerprint advocate at Police Headquarters, introduced the fingerprinting of criminals to the United States.
Uhlenhuth test
The Uhlenhuth test, or the antigen–antibody precipitin test for species, was invented by Paul Uhlenhuth in 1901 and could distinguish human blood from animal blood, based on the discovery that the blood of different species contains one or more characteristic proteins. The test represented a major breakthrough and came to have tremendous importance in forensic science. It was further refined for forensic use by the Swiss chemist Maurice Müller in the 1960s.
DNA
Forensic DNA analysis was first used in 1984. It was developed by Sir Alec Jeffreys, who realized that variation in the genetic sequence could be used to identify individuals and tell them apart. The first application of DNA profiling came in a double murder mystery in the small English town of Narborough, Leicestershire, in 1985. A 15-year-old schoolgirl named Lynda Mann had been raped and murdered at Carlton Hayes psychiatric hospital. The police did not find a suspect but were able to obtain a semen sample.
In 1986, Dawn Ashworth, 15 years old, was also raped and strangled in the nearby village of Enderby. Forensic evidence showed that both killers had the same blood type. Richard Buckland became the suspect because he worked at Carlton Hayes psychiatric hospital, had been spotted near Dawn Ashworth's murder scene and knew unreleased details about the body. He later confessed to Dawn's murder but not Lynda's. Jeffreys was brought into the case to analyze the semen samples. He concluded that there was no match between the samples and Buckland, who became the first person to be exonerated using DNA. Jeffreys confirmed that the DNA profiles were identical for the two murder semen samples. To find the perpetrator, DNA samples were collected from the town's entire male population aged 17 to 34, more than 4,000 men, and all were compared with the semen samples from the crimes. A friend of Colin Pitchfork was heard saying that he had given his own sample to the police while claiming to be Colin. Colin Pitchfork was arrested in 1987, and his DNA profile was found to match the semen samples from the murders.
This case led to the development of DNA databases. There are national databases (such as that maintained by the FBI) and international ones, as well as those of European countries coordinated through ENFSI (the European Network of Forensic Science Institutes). These searchable databases are used to match crime scene DNA profiles to those already on file.
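A highly simplified sketch of the matching idea is shown below in Python. The locus names are real STR markers but the profiles are invented, and operational systems such as CODIS apply defined core loci, quality rules and statistical interpretation rather than a simple count of matching loci.

```python
# Toy illustration of STR-profile comparison; operational DNA databases use
# defined core loci, validation rules and statistical interpretation.
def matching_loci(profile_a, profile_b):
    """Count loci at which two profiles share the same (unordered) allele pair."""
    shared = set(profile_a) & set(profile_b)
    return sum(1 for locus in shared
               if sorted(profile_a[locus]) == sorted(profile_b[locus]))

# Hypothetical profiles: locus name -> two allele repeat numbers.
crime_scene = {"D8S1179": (12, 14), "TH01": (6, 9.3), "FGA": (21, 24)}
suspect     = {"D8S1179": (14, 12), "TH01": (6, 9.3), "FGA": (20, 24)}

print(matching_loci(crime_scene, suspect), "of", len(crime_scene), "loci match")
```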
Maturation
By the turn of the 20th century, the science of forensics had become largely established in the sphere of criminal investigation. Scientific and surgical investigation was widely employed by the Metropolitan Police during their pursuit of the mysterious Jack the Ripper, who had killed a number of women in the 1880s. This case is a watershed in the application of forensic science. Large teams of policemen conducted house-to-house inquiries throughout Whitechapel. Forensic material was collected and examined. Suspects were identified, traced and either examined more closely or eliminated from the inquiry. Police work follows the same pattern today. Over 2000 people were interviewed, "upwards of 300" people were investigated, and 80 people were detained.
The investigation was initially conducted by the Criminal Investigation Department (CID), headed by Detective Inspector Edmund Reid. Later, Detective Inspectors Frederick Abberline, Henry Moore, and Walter Andrews were sent from Central Office at Scotland Yard to assist. Initially, butchers, surgeons and physicians were suspected because of the manner of the mutilations. The alibis of local butchers and slaughterers were investigated, with the result that they were eliminated from the inquiry. Some contemporary figures thought the pattern of the murders indicated that the culprit was a butcher or cattle drover on one of the cattle boats that plied between London and mainland Europe. Whitechapel was close to the London Docks, and usually such boats docked on Thursday or Friday and departed on Saturday or Sunday. The cattle boats were examined, but the dates of the murders did not coincide with a single boat's movements, and the transfer of a crewman between boats was also ruled out.
At the end of October, Robert Anderson asked police surgeon Thomas Bond to give his opinion on the extent of the murderer's surgical skill and knowledge. The opinion offered by Bond on the character of the "Whitechapel murderer" is the earliest surviving offender profile. Bond's assessment was based on his own examination of the most extensively mutilated victim and the post mortem notes from the four previous canonical murders. In his opinion the killer must have been a man of solitary habits, subject to "periodical attacks of homicidal and erotic mania", with the character of the mutilations possibly indicating "satyriasis". Bond also stated that "the homicidal impulse may have developed from a revengeful or brooding condition of the mind, or that religious mania may have been the original disease but I do not think either hypothesis is likely".
Handbook for Coroners, police officials, military policemen was written by the Austrian criminal jurist Hans Gross in 1893, and is generally acknowledged as the birth of the field of criminalistics. The work combined in one system fields of knowledge that had not been previously integrated, such as psychology and physical science, and which could be successfully used against crime. Gross adapted some fields to the needs of criminal investigation, such as crime scene photography. He went on to found the Institute of Criminalistics in 1912, as part of the University of Graz' Law School. This Institute was followed by many similar institutes all over the world.
In 1909, Archibald Reiss founded the Institut de police scientifique of the University of Lausanne (UNIL), the first school of forensic science in the world. Dr. Edmond Locard became known as the "Sherlock Holmes of France". He formulated the basic principle of forensic science, "Every contact leaves a trace", which became known as Locard's exchange principle. In 1910, he founded what may have been the first criminal laboratory in the world, after persuading the Police Department of Lyon (France) to give him two attic rooms and two assistants.
Symbolic of the newfound prestige of forensics and the use of reasoning in detective work was the popularity of the fictional character Sherlock Holmes, written by Arthur Conan Doyle in the late 19th century. He remains a great inspiration for forensic science, especially for the way his acute study of a crime scene yielded small clues as to the precise sequence of events. He made great use of trace evidence such as shoe and tire impressions, as well as fingerprints, ballistics and handwriting analysis, now known as questioned document examination. Such evidence is used to test theories conceived by the police, for example, or by the investigator himself. All of the techniques advocated by Holmes later became reality, but were generally in their infancy at the time Conan Doyle was writing. In many of his reported cases, Holmes frequently complains of the way the crime scene has been contaminated by others, especially by the police, emphasising the critical importance of maintaining its integrity, a now well-known feature of crime scene examination. He used analytical chemistry for blood residue analysis as well as toxicology examination and determination for poisons. He used ballistics by measuring bullet calibres and matching them with a suspected murder weapon.
Late 19th – early 20th century figures
Hans Gross applied scientific methods to crime scenes and was responsible for the birth of criminalistics.
Edmond Locard expanded on Gross' work with Locard's Exchange Principle which stated "whenever two objects come into contact with one another, materials are exchanged between them". This means that every contact by a criminal leaves a trace.
Alexander Lacassagne, who taught Locard, produced autopsy standards on actual forensic cases.
Alphonse Bertillon was a French criminologist and founder of Anthropometry (scientific study of measurements and proportions of the human body). He used anthropometry for identification, stating that, since each individual is unique, by measuring aspects of physical difference there could be a personal identification system. He created the Bertillon System around 1879, a way of identifying criminals and citizens by measuring 20 parts of the body. In 1884, over 240 repeat offenders were caught using the Bertillon system, but the system was largely superseded by fingerprinting.
Frances Glessner Lee, known as "the mother of forensic science", was instrumental in the development of forensic science in the US. She lobbied to have coroners replaced by medical professionals, endowed the Harvard Associates in Police Science, and conducted many seminars to educate homicide investigators. She also created the Nutshell Studies of Unexplained Death, intricate crime scene dioramas used to train investigators, which are still in use today.
20th century
Later in the 20th century, several British pathologists, among them Bernard Spilsbury, Francis Camps, Sydney Smith and Keith Simpson, pioneered new forensic science methods. Alec Jeffreys pioneered the use of DNA profiling in forensic science in 1984. He realized the scope of DNA fingerprinting, which uses variations in the genetic code to identify individuals. The method has since become important in forensic science to assist police detective work, and it has also proved useful in resolving paternity and immigration disputes. DNA fingerprinting was first used as a police forensic test to identify the rapist and killer of two teenagers, Lynda Mann and Dawn Ashworth, who were murdered in Narborough, Leicestershire, in 1983 and 1986 respectively. Colin Pitchfork was identified and convicted of murder after samples taken from him matched semen samples taken from the two dead girls.
Forensic science has been fostered by a number of national and international forensic science learned bodies including the American Academy of Forensic Sciences (founded 1948), publishers of the Journal of Forensic Sciences; the Canadian Society of Forensic Science (founded 1953), publishers of the Journal of the Canadian Society of Forensic Science; the Chartered Society of Forensic Sciences, (founded 1959), then known as the Forensic Science Society, publisher of Science & Justice; the British Academy of Forensic Sciences (founded 1960), publishers of Medicine, Science and the Law; the Australian Academy of Forensic Sciences (founded 1967), publishers of the Australian Journal of Forensic Sciences; and the European Network of Forensic Science Institutes (founded 1995).
21st century
In the past decade, documenting forensic scenes has become more efficient. Forensic scientists have started using laser scanners, drones and photogrammetry to obtain 3D point clouds of accident or crime scenes. Reconstruction of an accident scene on a highway using drones requires only 10–20 minutes of data acquisition and can be performed without shutting down traffic. The results are not only accurate to the centimetre, suitable for presenting measurements in court, but also easy to preserve digitally in the long term.
Now, in the 21st century, much of forensic science's future is up for discussion. The National Institute of Standards and Technology (NIST) has several forensic science-related programs: CSAFE, a NIST Center of Excellence in Forensic Science, the National Commission on Forensic Science (now concluded), and administration of the Organization of Scientific Area Committees for Forensic Science (OSAC). One of the more recent additions by NIST is a document called NISTIR-7941, titled "Forensic Science Laboratories: Handbook for Facility Planning, Design, Construction, and Relocation". The handbook provides a clear blueprint for approaching forensic science. The details even include what type of staff should be hired for certain positions.
Subdivisions
Art forensics concerns art authentication cases and helps research a work's authenticity. Art authentication methods are used to detect and identify forgery, faking and copying of art works, e.g. paintings.
Bloodstain pattern analysis is the scientific examination of blood spatter patterns found at a crime scene to reconstruct the events of the crime.
Comparative forensics is the application of visual comparison techniques to verify similarity of physical evidence. This includes fingerprint analysis, toolmark analysis, and ballistic analysis.
Computational forensics concerns the development of algorithms and software to assist forensic examination.
Criminalistics is the application of various sciences to answer questions relating to examination and comparison of biological evidence, trace evidence, impression evidence (such as fingerprints, footwear impressions, and tire tracks), controlled substances, ballistics, firearm and toolmark examination, and other evidence in criminal investigations. In typical circumstances, evidence is processed in a crime lab.
Digital forensics is the application of proven scientific methods and techniques in order to recover data from electronic or digital media. Digital forensic specialists work in the field as well as in the lab.
Ear print analysis is used as a means of forensic identification intended as an identification tool similar to fingerprinting. An earprint is a two-dimensional reproduction of the parts of the outer ear that have touched a specific surface (most commonly the helix, antihelix, tragus and antitragus).
Election forensics is the use of statistics to determine whether election results are normal or abnormal. It is also used to investigate and detect cases of gerrymandering.
Forensic accounting is the study and interpretation of accounting evidence and financial statements, namely the balance sheet, income statement and cash flow statement.
Forensic aerial photography is the study and interpretation of aerial photographic evidence.
Forensic anthropology is the application of physical anthropology in a legal setting, usually for the recovery and identification of skeletonized human remains.
Forensic archaeology is the application of a combination of archaeological techniques and forensic science, typically in law enforcement.
Forensic astronomy uses methods from astronomy to determine past celestial constellations for forensic purposes.
Forensic botany is the study of plant life in order to gain information regarding possible crimes.
Forensic chemistry is the study of detection and identification of illicit drugs, accelerants used in arson cases, explosive and gunshot residue.
Forensic dactyloscopy is the study of fingerprints.
Forensic document examination or questioned document examination answers questions about a disputed document using a variety of scientific processes and methods. Many examinations involve a comparison of the questioned document, or components of the document, with a set of known standards. The most common type of examination involves handwriting, whereby the examiner tries to address concerns about potential authorship.
Forensic DNA analysis takes advantage of the uniqueness of an individual's DNA to answer forensic questions such as paternity/maternity testing and placing a suspect at a crime scene, e.g. in a rape investigation.
Forensic engineering is the scientific examination and analysis of structures and products relating to their failure or cause of damage.
Forensic entomology deals with the examination of insects in, on and around human remains to assist in determination of time or location of death. It is also possible to determine if the body was moved after death using entomology.
Forensic geology deals with trace evidence in the form of soils, minerals and petroleum.
Forensic geomorphology is the study of the ground surface to look for potential location(s) of buried object(s).
Forensic geophysics is the application of geophysical techniques such as radar for detecting objects hidden underground or underwater.
Forensic intelligence is a process that starts with the collection of data and ends with the integration of results into the analysis of crimes under investigation.
Forensic interviewing is the professional use of expertise to conduct a variety of investigative interviews with victims, witnesses, suspects or other sources to determine the facts regarding suspicions, allegations or specific incidents, in either public or private sector settings.
Forensic histopathology is the application of histological techniques and examination to forensic pathology practice.
Forensic limnology is the analysis of evidence collected from crime scenes in or around fresh-water sources. Examination of biological organisms, in particular diatoms, can be useful in connecting suspects with victims.
Forensic linguistics deals with issues in the legal system that require linguistic expertise.
Forensic meteorology is a site-specific analysis of past weather conditions for a point of loss.
Forensic metrology is the application of metrology to assess the reliability of scientific evidence obtained through measurements.
Forensic microbiology is the study of the necrobiome.
Forensic nursing is the application of nursing science to abusive crimes such as child abuse or sexual abuse. Categorization of wounds and traumas, collection of bodily fluids and emotional support are some of the duties of forensic nurses.
Forensic odontology is the study of the uniqueness of dentition, better known as the study of teeth.
Forensic optometry is the study of glasses and other eyewear relating to crime scenes and criminal investigations.
Forensic pathology is a field in which the principles of medicine and pathology are applied to determine a cause of death or injury in the context of a legal inquiry.
Forensic podiatry is the application of the study of feet, footprints or footwear and their traces to analyze a crime scene and to establish personal identity in forensic examinations.
Forensic psychiatry is a specialized branch of psychiatry as applied to and based on scientific criminology.
Forensic psychology is the study of the mind of an individual, using forensic methods. Usually it determines the circumstances behind a criminal's behavior.
Forensic seismology is the study of techniques to distinguish the seismic signals generated by underground nuclear explosions from those generated by earthquakes.
Forensic serology is the study of body fluids.
Forensic social work is the specialist study of social work theories and their applications to a clinical, criminal justice or psychiatric setting. Practitioners of forensic social work connected with the criminal justice system are often termed Social Supervisors, whilst the remaining use the interchangeable titles forensic social worker, approved mental health professional or forensic practitioner and they conduct specialist assessments of risk, care planning and act as an officer of the court.
Forensic toxicology is the study of the effect of drugs and poisons on/in the human body.
Forensic video analysis is the scientific examination, comparison and evaluation of video in legal matters.
Mobile device forensics is the scientific examination and evaluation of evidence found in mobile phones, e.g. Call History and Deleted SMS, and includes SIM Card Forensics.
Trace evidence analysis is the analysis and comparison of trace evidence including glass, paint, fibres and hair (e.g., using micro-spectrophotometry).
Wildlife forensic science applies a range of scientific disciplines to legal cases involving non-human biological evidence, to solve crimes such as poaching, animal abuse, and trade in endangered species.
Questionable techniques
Some forensic techniques, believed to be scientifically sound at the time they were used, have turned out later to have much less scientific merit or none. Some such techniques include:
Comparative bullet-lead analysis was used by the FBI for over four decades, starting with the John F. Kennedy assassination in 1963. The theory was that each batch of ammunition possessed a chemical makeup so distinct that a bullet could be traced back to a particular batch or even a specific box. Internal studies and an outside study by the National Academy of Sciences found that the technique was unreliable due to improper interpretation, and the FBI abandoned the test in 2005.
Forensic dentistry has come under fire: in at least three cases bite-mark evidence has been used to convict people of murder who were later freed by DNA evidence. A 1999 study by a member of the American Board of Forensic Odontology found a 63 percent rate of false identifications and is commonly referenced within online news stories and conspiracy websites. The study was based on an informal workshop during an ABFO meeting, which many members did not consider a valid scientific setting. The theory is that each person has a unique and distinctive set of teeth, which leave a pattern after biting someone. They analyze the dental characteristics such as size, shape, and arch form.
Police access to genetic genealogy databases: There are privacy concerns about police being able to access personal genetic data held by genealogy services. Individuals can effectively become informants on their own relatives, or on themselves, simply by participating in genetic genealogy databases. The Combined DNA Index System (CODIS) is a database that the FBI uses to hold genetic profiles of known felons, misdemeanants, and arrestees. Some people argue that individuals who use genealogy databases should have an expectation of privacy in their data, an expectation that is or may be violated by genetic searches by law enforcement. These services carry warnings about potential third parties using the information, but most individuals do not read the agreements thoroughly. A study by Christi Guerrini, Jill Robinson, Devan Petersen, and Amy McGuire found that the majority of survey respondents support police searches of genetic websites that identify genetic relatives. Respondents were more supportive of police use of genetic genealogy when its purpose was identifying offenders of violent crimes, suspects in crimes against children, or missing people. The survey data suggest that individuals are not concerned about police searches using personal genetic data when they consider such searches justified. The study also noted that offenders are disproportionately low-income and black, while the average consumer of genetic testing is wealthy and white. These findings can be compared with national victimization statistics. In 2016, the National Crime Victimization Survey (NCVS), administered by the US Bureau of Justice Statistics, found that 1.3% of people aged 12 or older were victims of violent crime and 8.8% of households were victims of property crime. The NCVS, however, produces only annual estimates of victimization, whereas the survey by Guerrini, Robinson, Petersen, and McGuire asked participants about victimization over their lifetimes and did not restrict family members to a single household. Around 25% of respondents said they had family members who had been employed by law enforcement, including security guards and bailiffs. Across these surveys, there appears to be public support for law enforcement access to genetic genealogy databases.
Litigation science
"Litigation science" describes analysis or data developed or produced expressly for use in a trial versus those produced in the course of independent research. This distinction was made by the U.S. 9th Circuit Court of Appeals when evaluating the admissibility of experts.
This uses demonstrative evidence, which is evidence created in preparation of trial by attorneys or paralegals.
Demographics
In the United States there are over 17,200 forensic science technicians as of 2019.
Media impact
Real-life crime scene investigators and forensic scientists warn that popular television shows do not give a realistic picture of the work, often wildly distorting its nature, and exaggerating the ease, speed, effectiveness, drama, glamour, influence and comfort level of their jobs—which they describe as far more mundane, tedious and boring.
Some claim these modern TV shows have changed individuals' expectations of forensic science, sometimes unrealistically—an influence termed the "CSI effect".
Further, research has suggested that public misperceptions about criminal forensics can create, in the mind of a juror, unrealistic expectations of forensic evidence—which they expect to see before convicting—implicitly biasing the juror towards the defendant. Citing the "CSI effect," at least one researcher has suggested screening jurors for their level of influence from such TV programs.
Controversies
Questions about certain areas of forensic science, such as fingerprint evidence and the assumptions behind these disciplines have been brought to light in some publications including the New York Post. The article stated that "No one has proved even the basic assumption: That everyone's fingerprint is unique." The article also stated that "Now such assumptions are being questioned—and with it may come a radical change in how forensic science is used by police departments and prosecutors." Law professor Jessica Gabel said on NOVA that forensic science "lacks the rigors, the standards, the quality controls and procedures that we find, usually, in science".
The National Institute of Standards and Technology (NIST) has reviewed the scientific foundations of bite-mark analysis used in forensic science. Bite-mark analysis is a forensic science technique that compares marks on a victim's skin to a suspect's teeth. NIST reviewed the findings of the 2009 study by the National Academies of Sciences, Engineering, and Medicine, which examined the reliability and accuracy of bite-mark analysis and concluded that there is a lack of sufficient scientific foundation to support it. Yet the technique is still legal to use in court as evidence. NIST funded a 2019 meeting that brought together dentists, lawyers, researchers and others to address the gaps in this field.
In the US, on 25 June 2009, the Supreme Court issued a 5-to-4 decision in Melendez-Diaz v. Massachusetts stating that crime laboratory reports may not be used against criminal defendants at trial unless the analysts responsible for creating them give testimony and subject themselves to cross-examination. The Supreme Court cited the National Academies of Sciences report Strengthening Forensic Science in the United States in their decision. Writing for the majority, Justice Antonin Scalia referred to the National Research Council report in his assertion that "Forensic evidence is not uniquely immune from the risk of manipulation."
In the US, another area of forensic science that has come under question in recent years is the lack of laws requiring the accreditation of forensic labs. Some states require accreditation, but others do not. Because of this, many labs have been caught performing very poor work, resulting in false convictions or acquittals. For example, an audit of the Houston Police Department in 2002 discovered that the lab had fabricated evidence, which led to George Rodriguez being convicted of raping a fourteen-year-old girl. The former director of the lab, when asked, said that the total number of cases that could have been contaminated by improper work could be in the range of 5,000 to 10,000.
The Innocence Project database of DNA exonerations shows that many wrongful convictions involved forensic science errors. According to the Innocence Project and the US Department of Justice, forensic science has contributed to about 39 to 46 percent of wrongful convictions. As indicated by the National Academy of Sciences report Strengthening Forensic Science in the United States, part of the problem is that many traditional forensic sciences have never been empirically validated, and part of the problem is that all examiners are subject to forensic confirmation biases and should be shielded from contextual information not relevant to the judgment they make.
Many studies have found differences in the reporting of rape-related injuries based on race, with white victims reporting a higher frequency of injuries than black victims. However, since current forensic examination techniques may not be sensitive to all injuries across the range of skin colors, more research is needed to understand whether this trend reflects darker skin confounding healthcare providers when examining injuries or whether darker skin confers some protective element. In clinical practice, one study recommends that for patients with darker skin, close attention be paid to the thighs, labia majora, posterior fourchette and fossa navicularis, so that no rape-related injuries are missed.
Forensic science and humanitarian work
The International Committee of the Red Cross (ICRC) uses forensic science for humanitarian purposes to clarify the fate of missing persons after armed conflict, disasters or migration, and is one of the services related to Restoring Family Links and Missing Persons. Knowing what has happened to a missing relative can often make it easier to proceed with the grieving process and move on with life for families of missing persons.
Forensic science is used by various other organizations to clarify the fate and whereabouts of persons who have gone missing. Examples include the NGO Argentine Forensic Anthropology Team, working to clarify the fate of people who disappeared during the period of the 1976–1983 military dictatorship. The International Commission on Missing Persons (ICMP) used forensic science to find missing persons, for example after the conflicts in the Balkans.
Recognising the role of forensic science for humanitarian purposes, as well as the importance of forensic investigations in fulfilling the state's responsibilities to investigate human rights violations, a group of experts in the late-1980s devised a UN Manual on the Prevention and Investigation of Extra-Legal, Arbitrary and Summary Executions, which became known as the Minnesota Protocol. This document was revised and re-published by the Office of the High Commissioner for Human Rights in 2016.
See also
(forensic paleography)
(RSID)
References
Bibliography
Anil Aggrawal's Internet Journal of Forensic Medicine and Toxicology.
Forensic Magazine – Forensicmag.com.
Forensic Science Communications, an open access journal of the FBI.
Forensic Science International – An international journal dedicated to the applications of medicine and science in the administration of justice – Elsevier
"The Real CSI", PBS Frontline documentary, 17 April 2012.
Baden, Michael; Roach, Marion. Dead Reckoning: The New Science of Catching Killers, Simon & Schuster, 2001.
Bartos, Leah, "No Forensic Background? No Problem", ProPublica, 17 April 2012.
Guatelli-Steinberg, Debbie; Mitchell, John C. Structure Magazine no. 40, "RepliSet: High Resolution Impressions of the Teeth of Human Ancestors".
Holt, Cynthia. Guide to Information Sources in the Forensic Sciences, Libraries Unlimited, 2006.
Jamieson, Allan; Moenssens, Andre (eds). Wiley Encyclopedia of Forensic Science, John Wiley & Sons Ltd, 2009. Online version.
Kind, Stuart; Overman, Michael. Science Against Crime, Doubleday, 1972.
Lewis, Peter Rhys; Gagg, Colin; Reynolds, Ken. Forensic Materials Engineering: Case Studies, CRC Press, 2004.
Nickell, Joe; Fischer, John F. Crime Science: Methods of Forensic Detection, University Press of Kentucky, 1999.
Owen, D. (2000). Hidden Evidence: The Story of Forensic Science and how it Helped to Solve 40 of the World's Toughest Crimes, Quintet Publishing, London.
Quinche, Nicolas, and Margot, Pierre, "Coulier, Paul-Jean (1824–1890): A precursor in the history of fingermark detection and their potential use for identifying their source (1863)", Journal of Forensic Identification (California), 60 (2), March–April 2010, pp. 129–134.
Silverman, Mike; Thompson, Tony. Written in Blood: A History of Forensic Science. 2014.
External links
Forensic educational resources
Applied sciences
Criminology
Heuristics
Medical aspects of death
Chromatography
Computable general equilibrium
Computable general equilibrium (CGE) models are a class of economic models that use actual economic data to estimate how an economy might react to changes in policy, technology or other external factors. CGE models are also referred to as AGE (applied general equilibrium) models. A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour.
CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions.
Main features
A CGE model consists of equations describing model variables and a database (usually very detailed) consistent with these model equations. The equations tend to be neoclassical in spirit, often assuming cost-minimizing behaviour by producers, average-cost pricing, and household demands based on optimizing behaviour. However, most CGE models conform only loosely to the theoretical general equilibrium paradigm. For example, they may allow for:
non-market clearing, especially for labour (unemployment) or for commodities (inventories)
imperfect competition (e.g., monopoly pricing)
demands not influenced by price (e.g., government demands)
CGE models always contain more variables than equations—so some variables must be set outside the model. These variables are termed exogenous; the remainder, determined by the model, is called endogenous. The choice of which variables are to be exogenous is called the model closure, and may give rise to controversy. For example, some modelers hold employment and the trade balance fixed; others allow these to vary. Variables defining technology, consumer tastes, and government instruments (such as tax rates) are usually exogenous.
A CGE model database consists of:
tables of transaction values, showing, for example, the value of coal used by the iron industry. Usually the database is presented as an input-output table or as a social accounting matrix (SAM). In either case, it covers the whole economy of a country (or even the whole world), and distinguishes a number of sectors, commodities, primary factors and perhaps types of households. Sectoral coverage ranges from relatively simple representations of capital, labor and intermediates to highly detailed representations of specific sub-sectors (e.g., the electricity sector in GTAP-Power.)
elasticities: dimensionless parameters that capture behavioural response. For example, export demand elasticities specify by how much export volumes might fall if export prices went up. Other elasticities may belong to the constant elasticity of substitution class. Amongst these are Armington elasticities, which show whether products of different countries are close substitutes, and elasticities measuring how easily inputs to production may be substituted for one another. Income elasticity of demand shows how household demands respond to income changes.
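As a minimal sketch of how such elasticities enter a model, the following Python fragment computes cost-minimizing demands for a domestic and an imported variety combined in a CES (Armington-style) composite, and shows how the substitution elasticity governs the response to a 10% rise in the import price. All parameter values are illustrative assumptions, not estimates from any model database.

```python
# Minimal CES (Armington-style) demand sketch; parameter values are illustrative.
def ces_demands(total_quantity, prices, share_params, sigma):
    """Cost-minimizing demands for the varieties of a CES composite (sigma != 1)."""
    # CES price index: P = (sum_i delta_i^sigma * p_i^(1 - sigma))^(1 / (1 - sigma))
    price_index = sum(share_params[i] ** sigma * prices[i] ** (1.0 - sigma)
                      for i in prices) ** (1.0 / (1.0 - sigma))
    # Demand for each variety: x_i = Q * (delta_i * P / p_i)^sigma
    return {i: total_quantity * (share_params[i] * price_index / prices[i]) ** sigma
            for i in prices}

shares = {"domestic": 0.6, "imported": 0.4}
base  = ces_demands(100.0, {"domestic": 1.0, "imported": 1.0}, shares, sigma=2.0)
shock = ces_demands(100.0, {"domestic": 1.0, "imported": 1.1}, shares, sigma=2.0)
for variety in base:
    change = 100.0 * (shock[variety] / base[variety] - 1.0)
    print(f"{variety}: {change:+.1f}% after a 10% import price rise (sigma = 2)")
```

With a higher Armington elasticity the switch away from imports would be larger; with a lower one, smaller.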
History
CGE models are descended from the input–output models pioneered by Wassily Leontief, but assign a more important role to prices. Thus, where Leontief assumed that, say, a fixed amount of labour was required to produce a ton of iron, a CGE model would normally allow wage levels to (negatively) affect labour demands.
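The contrast can be illustrated with a small sketch: under a fixed Leontief coefficient the labour needed per unit of output never responds to the wage, whereas under a price-responsive specification (here a Cobb-Douglas cost-share rule, one of many possibilities) a higher wage reduces the quantity of labour demanded. The numbers are purely illustrative.

```python
# Illustrative contrast only; real CGE models use richer functional forms and data.
def leontief_labour(output_tons, labour_per_ton=2.0, wage=None):
    """Fixed-coefficient (Leontief) labour demand: the wage has no effect."""
    return labour_per_ton * output_tons

def cobb_douglas_labour(output_value, labour_share=0.4, wage=1.0):
    """Cobb-Douglas cost-share rule: the labour bill is a fixed share of output
    value, so the quantity of labour demanded falls when the wage rises."""
    return labour_share * output_value / wage

for wage in (1.0, 1.25):
    print(f"wage {wage}: Leontief {leontief_labour(100.0, wage=wage):.0f}, "
          f"Cobb-Douglas {cobb_douglas_labour(100.0, wage=wage):.1f}")
```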
CGE models also derive from the models constructed from 1960 onwards (usually by foreign experts) for planning the economies of poorer countries. Compared to the Leontief model, development planning models focused more on constraints or shortages—of skilled labour, capital, or foreign exchange.
CGE modelling of richer economies descends from Leif Johansen's 1960 MSG model of Norway, and the static model developed by the Cambridge Growth Project in the UK. Both models were pragmatic in flavour, and traced variables through time. The Australian MONASH model is a modern representative of this class. Perhaps the first CGE model similar to those of today was that of Taylor and Black (1974).
Areas of use
CGE models are useful whenever we wish to estimate the effect of changes in one part of the economy upon the rest. For example, a tax on flour might affect bread prices, the CPI, and hence perhaps wages and employment. They have been used widely to analyse trade policy. More recently, CGE has been a popular way to estimate the economic effects of measures to reduce greenhouse gas emissions.
Trade policy
CGE models have been used widely to analyse trade policy. Today there are many CGE models of different countries. One of the most well-known CGE models is global: the GTAP model of world trade.
Developing economies
CGE models are useful to model the economies of countries for which time series data are scarce or not relevant (perhaps because of disturbances such as regime changes). Here, strong, reasonable, assumptions embedded in the model must replace historical evidence. Thus developing economies are often analysed using CGE models, such as those based on the IFPRI template model.
Climate policy
CGE models can specify consumer and producer behaviour and ‘simulate’ the effects of climate policy on various economic outcomes. They can show economic gains and losses across different groups (e.g., households that differ in income, or in different regions). The equations include assumptions about the behavioural response of different groups. Because the prices paid for various outputs adjust endogenously, the direct burdens of a policy can be shifted from one group of taxpayers to another.
Comparative-static and dynamic CGE models
Many CGE models are comparative static: they model the reactions of the economy at only one point in time. For policy analysis, results from such a model are often interpreted as showing the reaction of the economy in some future period to one or a few external shocks or policy changes. That is, the results show the difference (usually reported in percent change form) between two alternative future states (with and without the policy shock). The process of adjustment to the new equilibrium, in particular the reallocation of labor and capital across sectors, usually is not explicitly represented in such a model.
In contrast, long-run models focus on adjustments to the underlying resource base when modeling policy changes. This can include dynamic adjustment to the labor supply, adjustments in installed and overall capital stocks, and even adjustment to overall productivity and market structure. There are two broad approaches followed in the policy literature to such long-run adjustment. One involves what is called "comparative steady state" analysis. Under such an approach, long-run or steady-state closure rules are used, under either forward-looking or recursive dynamic behavior, to solve for long-run adjustments.
The alternative approach involves explicit modeling of dynamic adjustment paths. These models can seem more realistic, but they are more challenging to construct and solve. They require, for instance, that future changes be predicted for all exogenous variables, not just those affected by a possible policy change. The dynamic elements may arise from partial adjustment processes or from stock/flow accumulation relations: between capital stocks and investment, and between foreign debt and trade deficits. However, there is a potential consistency problem, because the variables that change from one equilibrium solution to the next are not necessarily consistent with each other during the period of change. The modeling of the adjustment path may involve forward-looking expectations, where agents' expectations depend on the future state of the economy and it is necessary to solve for all periods simultaneously, leading to full multi-period dynamic CGE models. An alternative is recursive dynamics. Recursive-dynamic CGE models can be solved sequentially (one period at a time), on the assumption that behaviour depends only on current and past states of the economy. Comparative steady-state analysis, in which a single period is solved for, is a special case of recursive-dynamic modeling over what can be multiple periods.
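The recursive structure can be sketched as follows: each period an outcome is computed from the current state, and stocks are then updated before the next period is solved. The toy model below is a one-sector Solow-style recursion, far simpler than a real recursive-dynamic CGE model (which would solve a full multi-sector equilibrium each period), and all parameter values are illustrative.

```python
# Minimal sketch of recursive dynamics: solve the "within-period" problem from the
# current capital stock, then update the stock via the investment flow.
def run_recursive(periods=5, capital=100.0, labour=1.0,
                  alpha=0.3, tfp=1.0, saving_rate=0.2, depreciation=0.05):
    path = []
    for t in range(periods):
        output = tfp * capital ** alpha * labour ** (1.0 - alpha)  # within-period outcome
        investment = saving_rate * output
        path.append((t, round(capital, 2), round(output, 2)))
        capital = (1.0 - depreciation) * capital + investment      # stock/flow accumulation
    return path

for t, k, y in run_recursive():
    print(f"period {t}: capital {k}, output {y}")
```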
Techniques
Early CGE models were often solved by a program custom-written for that particular model. Models were expensive to construct and sometimes appeared as a 'black box' to outsiders. Now, most CGE models are formulated and solved using one of the GAMS or GEMPACK software systems.
AMPL, Excel and MATLAB are also used. Use of such systems has lowered the cost of entry to CGE modelling; allowed model simulations to be independently replicated; and increased the transparency of the models.
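The general pattern of "model equations plus database plus numerical solver" can be conveyed with a deliberately tiny example: a two-household, two-good exchange economy with Cobb-Douglas demands, solved here in Python with a root-finder rather than GAMS or GEMPACK. Good 1 is fixed as the numeraire (a simple closure choice), and all endowments and budget shares are invented for illustration.

```python
# Toy general-equilibrium computation (requires SciPy). Far simpler than a real
# CGE model, but it shows the equations-plus-data-plus-solver pattern.
from scipy.optimize import brentq

endowments = {"A": (10.0, 2.0), "B": (2.0, 8.0)}   # (good 1, good 2) per household
alpha2     = {"A": 0.4, "B": 0.7}                  # budget share spent on good 2

def excess_demand_good2(p2, p1=1.0):
    demand, supply = 0.0, 0.0
    for household, (w1, w2) in endowments.items():
        income = p1 * w1 + p2 * w2
        demand += alpha2[household] * income / p2  # Cobb-Douglas demand for good 2
        supply += w2
    return demand - supply

p2_star = brentq(excess_demand_good2, 1e-6, 1e6)   # market-clearing relative price
print(f"equilibrium price of good 2 (good 1 = 1): {p2_star:.4f}")
# By Walras' law, the market for good 1 also clears at this price.
```

For these illustrative numbers the solver returns a relative price of 1.5.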
See also
Macroeconomic model
References
Further reading
Adelman, Irma and Sherman Robinson (1978). Income Distribution Policy in Developing Countries: A Case Study of Korea, Stanford University Press
Baldwin, Richard E., and Joseph F. Francois, eds. Dynamic Issues in Commercial Policy Analysis. Cambridge University Press, 1999.
Bouët, Antoine (2008). The Expected Benefits of Trade Liberalization for World Income and Development: Opening the "Black Box" of Global Trade Modeling
Burfisher, Mary, Introduction to Computable General Equilibrium Models, Cambridge University Press: Cambridge, 2011.
Cardenete, M. Alejandro, Guerra, Ana-Isabel and Sancho, Ferran (2012). Applied General Equilibrium: An Introduction. Springer
Corong, Erwin L.; et al. (2017). "The Standard GTAP Model, Version 7". Journal of Global Economic Analysis. 2 (1): 1–119.
Dervis, Kemal; Jaime de Melo and Sherman Robinson (1982). General Equilibrium Models for Development Policy. Cambridge University Press
Dixon, Peter; Brian Parmenter; John Sutton and Dave Vincent (1982). ORANI: A Multisectoral Model of the Australian Economy, North-Holland
Dixon, Peter; Brian Parmenter; Alan Powell and Peter Wilcoxen (1992). Notes and Problems in Applied General Equilibrium Economics, North Holland
Dixon, Peter (2006). Evidence-based Trade Policy Decision Making in Australia and the Development of Computable General Equilibrium Modelling, CoPS/IMPACT Working Paper Number G-163
Dixon, Peter and Dale W. Jorgenson, ed. (2013). Handbook of Computable General Equilibrium Modeling, vols. 1A and 1B, North Holland.
Ginsburgh, Victor and Michiel Keyzer (1997). The Structure of Applied General Equilibrium Models, MIT Press
Hertel, Thomas, Global Trade Analysis: Modeling and Applications (Modelling and Applications), Cambridge University Press: Cambridge, 1999.
Kehoe, Patrick J. and Timothy J. Kehoe (1994) "A Primer on Static Applied General Equilibrium Models", Federal Reserve Bank of Minneapolis Quarterly Review, 18(2)
Kehoe, Timothy J. and Edward C. Prescott (1995) Edited volume on "Applied General Equilibrium", Economic Theory, 6
Lanz, Bruno and Rutherford, Thomas F. (2016) "GTAPinGAMS: Multiregional and Small Open Economy Models". Journal of Global Economic Analysis, vol. 1(2):1–77.
Reinert, Kenneth A., and Joseph F. Francois, eds. Applied Methods for Trade Policy Analysis: A Handbook. Cambridge University Press, 1997.
Shoven, John and John Whalley (1984). "Applied General-Equilibrium Models of Taxation and International Trade: An Introduction and Survey". Journal of Economic Literature, vol. 22(3) 1007–51
Shoven, John and John Whalley (1992). Applying General Equilibrium, Cambridge University Press
External links
gEcon – software for DSGE and CGE modeling
General equilibrium theory
Mathematical and quantitative methods (economics)
Hydrology
Hydrology is the scientific study of the movement, distribution, and management of water on Earth and other planets, including the water cycle, water resources, and drainage basin sustainability. A practitioner of hydrology is called a hydrologist. Hydrologists typically have backgrounds in earth or environmental science, civil or environmental engineering, or physical geography. Using various analytical methods and scientific techniques, they collect and analyze data to help solve water-related problems such as environmental preservation, natural disasters, and water management.
Hydrology subdivides into surface water hydrology, groundwater hydrology (hydrogeology), and marine hydrology. Domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management, and water quality.
Oceanography and meteorology are not included because water is only one of many important aspects within those fields.
Hydrological research can inform environmental engineering, policy, and planning.
Branches
Chemical hydrology is the study of the chemical characteristics of water.
Ecohydrology is the study of interactions between organisms and the hydrologic cycle.
Hydrogeology is the study of the presence and movement of groundwater.
Hydrogeochemistry is the study of how terrestrial water dissolves minerals through weathering and of the effect of this on water chemistry.
Hydroinformatics is the adaptation of information technology to hydrology and water resources applications.
Hydrometeorology is the study of the transfer of water and energy between land and water body surfaces and the lower atmosphere.
Isotope hydrology is the study of the isotopic signatures of water.
Surface hydrology is the study of hydrologic processes that operate at or near Earth's surface.
Drainage basin management covers water storage, in the form of reservoirs, and flood protection.
Water quality includes the chemistry of water in rivers and lakes, both of pollutants and natural solutes.
Applications
Calculation of rainfall.
Calculation of evapotranspiration.
Calculating surface runoff and precipitation.
Determining the water balance of a region.
Determining the agricultural water balance.
Designing riparian-zone restoration projects.
Mitigating and predicting flood, landslide and drought risk.
Real-time flood forecasting, flood warning, and flood frequency analysis.
Designing irrigation schemes and managing agricultural productivity.
Part of the hazard module in catastrophe modeling.
Providing drinking water.
Designing dams for water supply or hydroelectric power generation.
Designing bridges.
Designing sewers and urban drainage systems.
Analyzing the impacts of antecedent moisture on sanitary sewer systems.
Predicting geomorphologic changes, such as erosion or sedimentation.
Assessing the impacts of natural and anthropogenic environmental change on water resources.
Assessing contaminant transport risk and establishing environmental policy guidelines.
Estimating the water resource potential of river basins.
Water resources management.
Water resources engineering - application of hydrological and hydraulic principles to the planning, development, and management of water resources for beneficial human use. It involves assessing water availability, quality, and demand; designing and operating water infrastructure; and implementing strategies for sustainable water management.
History
Hydrology has been subject to investigation and engineering for millennia. Ancient Egyptians were one of the first to employ hydrology in their engineering and agriculture, inventing a form of water management known as basin irrigation. Mesopotamian towns were protected from flooding with high earthen walls. Aqueducts were built by the Greeks and Romans, while history shows that the Chinese built irrigation and flood control works. The ancient Sinhalese used hydrology to build complex irrigation works in Sri Lanka and are also known for the invention of the valve pit, which allowed the construction of large reservoirs, anicuts and canals that still function.
Marcus Vitruvius, in the first century BC, described a philosophical theory of the hydrologic cycle, in which precipitation falling in the mountains infiltrated the Earth's surface and led to streams and springs in the lowlands. With the adoption of a more scientific approach, Leonardo da Vinci and Bernard Palissy independently reached an accurate representation of the hydrologic cycle. It was not until the 17th century that hydrologic variables began to be quantified.
Pioneers of the modern science of hydrology include Pierre Perrault, Edme Mariotte and Edmund Halley. By measuring rainfall, runoff, and drainage area, Perrault showed that rainfall was sufficient to account for the flow of the Seine. Mariotte combined velocity and river cross-section measurements to obtain a discharge value, again in the Seine. Halley showed that the evaporation from the Mediterranean Sea was sufficient to account for the outflow of rivers flowing into the sea.
Advances in the 18th century included the Bernoulli piezometer and Bernoulli's equation, by Daniel Bernoulli, and the Pitot tube, by Henri Pitot. The 19th century saw development in groundwater hydrology, including Darcy's law, the Dupuit-Thiem well formula, and Hagen-Poiseuille's capillary flow equation.
Rational analyses began to replace empiricism in the 20th century, while governmental agencies began their own hydrological research programs. Of particular importance were Leroy Sherman's unit hydrograph, the infiltration theory of Robert E. Horton, and C.V. Theis' aquifer test/equation describing well hydraulics.
Since the 1950s, hydrology has been approached with a more theoretical basis than in the past, facilitated by advances in the physical understanding of hydrological processes and by the advent of computers and especially geographic information systems (GIS). (See also GIS and hydrology)
Themes
The central theme of hydrology is that water circulates throughout the Earth through different pathways and at different rates. The most vivid image of this is in the evaporation of water from the ocean, which forms clouds. These clouds drift over the land and produce rain. The rainwater flows into lakes, rivers, or aquifers. The water in lakes, rivers, and aquifers then either evaporates back to the atmosphere or eventually flows back to the ocean, completing a cycle. Water changes its state of being several times throughout this cycle.
The areas of research within hydrology concern the movement of water between its various states, or within a given state, or simply quantifying the amounts in these states in a given region. Parts of hydrology concern developing methods for directly measuring these flows or amounts of water, while others concern modeling these processes either for scientific knowledge or for making a prediction in practical applications.
Groundwater
Ground water is water beneath Earth's surface, often pumped for drinking water. Groundwater hydrology (hydrogeology) considers quantifying groundwater flow and solute transport. Problems in describing the saturated zone include the characterization of aquifers in terms of flow direction, groundwater pressure and, by inference, groundwater depth (see: aquifer test). Measurements here can be made using a piezometer. Aquifers are also described in terms of hydraulic conductivity, storativity and transmissivity. There are a number of geophysical methods for characterizing aquifers. There are also problems in characterizing the vadose zone (unsaturated zone).
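Groundwater flow of the kind quantified here is commonly described with Darcy's law, q = −K dh/dl; the Python sketch below shows the arithmetic for an assumed hydraulic conductivity, head gradient, and aquifer cross-section, with all numbers invented for illustration rather than taken from any particular aquifer.

def darcy_flux(K, dh, dl):
    """Specific discharge q (m/s) from Darcy's law: q = -K * dh / dl.
    K is hydraulic conductivity (m/s); dh/dl is the hydraulic gradient."""
    return -K * dh / dl

# Illustrative values only: a sandy aquifer with K ~ 1e-4 m/s and a head
# drop of 2 m over 500 m, flowing through a section 1000 m wide and 20 m thick.
q = darcy_flux(K=1e-4, dh=-2.0, dl=500.0)
cross_section = 1000.0 * 20.0  # m^2
print(f"specific discharge = {q:.2e} m/s, volumetric flow = {q * cross_section:.4f} m^3/s")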
Infiltration
Infiltration is the process by which water enters the soil. Some of the water is absorbed, and the rest percolates down to the water table. The infiltration capacity, the maximum rate at which the soil can absorb water, depends on several factors. The layer that is already saturated provides a resistance that is proportional to its thickness, while that plus the depth of water above the soil provides the driving force (hydraulic head). Dry soil can allow rapid infiltration by capillary action; this force diminishes as the soil becomes wet. Compaction reduces the porosity and the pore sizes. Surface cover increases infiltration capacity by retarding runoff and reducing compaction. Higher temperatures reduce viscosity, increasing infiltration.
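One common way to put numbers on this decline of infiltration capacity as a soil wets up is Horton's equation, f(t) = fc + (f0 − fc)e^(−kt); the parameter values in the sketch below are assumed purely for illustration.

import math

def horton_infiltration_capacity(t_hours, f0=60.0, fc=10.0, k=2.0):
    """Infiltration capacity (mm/h) after t hours of wetting (Horton's equation).
    f0 = initial capacity, fc = final (saturated) capacity, k = decay rate (1/h);
    these defaults are illustrative, not measured values."""
    return fc + (f0 - fc) * math.exp(-k * t_hours)

for t in (0.0, 0.5, 1.0, 2.0):
    print(f"t = {t:.1f} h  capacity = {horton_infiltration_capacity(t):5.1f} mm/h")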
Soil moisture
Soil moisture can be measured in various ways; by capacitance probe, time domain reflectometer or tensiometer. Other methods include solute sampling and geophysical methods.
Surface water flow
Hydrology considers quantifying surface water flow and solute transport, although the treatment of flows in large rivers is sometimes considered as a distinct topic of hydraulics or hydrodynamics. Surface water flow can include flow both in recognizable river channels and otherwise. Methods for measuring flow once the water has reached a river include the stream gauge (see: discharge), and tracer techniques. Other topics include chemical transport as part of surface water, sediment transport and erosion.
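For instance, a stream-gauging measurement of the kind mentioned above is often reduced to a discharge estimate with the velocity–area method, summing width × depth × mean velocity over verticals across the channel; the numbers below are invented for illustration.

# Velocity-area estimate of discharge: Q = sum(width * depth * mean velocity).
# (width m, depth m, mean velocity m/s) for each vertical -- invented data.
verticals = [
    (2.0, 0.4, 0.3),
    (2.0, 0.9, 0.6),
    (2.0, 1.2, 0.8),
    (2.0, 0.7, 0.5),
    (2.0, 0.3, 0.2),
]
discharge = sum(w * d * v for w, d, v in verticals)
print(f"estimated discharge = {discharge:.2f} m^3/s")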
One of the important areas of hydrology is the interchange between rivers and aquifers. Groundwater/surface water interactions in streams and aquifers can be complex and the direction of net water flux (into surface water or into the aquifer) may vary spatially along a stream channel and over time at any particular location, depending on the relationship between stream stage and groundwater levels.
Precipitation and evaporation
In some considerations, hydrology is thought of as starting at the land-atmosphere boundary and so it is important to have adequate knowledge of both precipitation and evaporation. Precipitation can be measured in various ways: disdrometer for precipitation characteristics at a fine time scale; radar for cloud properties, rain rate estimation, hail and snow detection; rain gauge for routine accurate measurements of rain and snowfall; satellite for rainy area identification, rain rate estimation, land-cover/land-use, and soil moisture, snow cover or snow water equivalent for example.
Evaporation is an important part of the water cycle. It is partly affected by humidity, which can be measured by a sling psychrometer. It is also affected by the presence of snow, hail, and ice and can relate to dew, mist and fog. Hydrology considers evaporation of various forms: from water surfaces, and as transpiration from plant surfaces in natural and agronomic ecosystems. Direct measurement of evaporation can be obtained using Simon's evaporation pan.
Detailed studies of evaporation involve boundary layer considerations as well as momentum, heat flux, and energy budgets.
Remote sensing
Remote sensing of hydrologic processes can provide information on locations where in situ sensors may be unavailable or sparse. It also enables observations over large spatial extents. Many of the variables constituting the terrestrial water balance, for example surface water storage, soil moisture, precipitation, evapotranspiration, and snow and ice, are measurable using remote sensing at various spatial-temporal resolutions and accuracies. Sources of remote sensing include land-based sensors, airborne sensors and satellite sensors which can capture microwave, thermal and near-infrared data or use lidar, for example.
Water quality
In hydrology, studies of water quality concern organic and inorganic compounds, and both dissolved and sediment material. In addition, water quality is affected by the interaction of dissolved oxygen with organic material and various chemical transformations that may take place. Measurements of water quality may involve either in-situ methods, in which analyses take place on-site, often automatically, or laboratory-based analyses, and may include microbiological analysis.
Integrating measurement and modelling
Budget analyses
Parameter estimation
Scaling in time and space
Data assimilation
Quality control of data – see for example Double mass analysis
Prediction
Observations of hydrologic processes are used to make predictions of the future behavior of hydrologic systems (water flow, water quality). One of the major current concerns in hydrologic research is "Prediction in Ungauged Basins" (PUB), i.e. in basins where no or only very few data exist.
Statistical hydrology
The aim of statistical hydrology is to provide appropriate statistical methods for analyzing and modeling various parts of the hydrological cycle. By analyzing the statistical properties of hydrologic records, such as rainfall or river flow, hydrologists can estimate future hydrologic phenomena. When making assessments of how often relatively rare events will occur, analyses are made in terms of the return period of such events. Other quantities of interest include the average flow in a river, in a year or by season.
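As a small sketch of a return-period calculation (the annual-maximum flows below are invented), ranking the series and applying the Weibull plotting position gives an empirical exceedance probability and return period for each observed flood.

# Empirical return periods from an annual-maximum flood series (invented data).
annual_max_flow = [310, 145, 220, 480, 190, 260, 175, 390, 205, 330]  # m^3/s
n = len(annual_max_flow)
for rank, q in enumerate(sorted(annual_max_flow, reverse=True), start=1):
    p_exceed = rank / (n + 1)          # Weibull plotting position
    return_period = 1.0 / p_exceed     # years
    print(f"Q = {q:3d} m^3/s  P(exceedance) = {p_exceed:.2f}  T = {return_period:4.1f} yr")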
These estimates are important for engineers and economists so that proper risk analysis can be performed to influence investment decisions in future infrastructure and to determine the yield reliability characteristics of water supply systems. Statistical information is utilized to formulate operating rules for large dams forming part of systems which include agricultural, industrial and residential demands.
Modeling
Hydrological models are simplified, conceptual representations of a part of the hydrologic cycle. They are primarily used for hydrological prediction and for understanding hydrological processes, within the general field of scientific modeling. Two major types of hydrological models can be distinguished:
Models based on data. These models are black box systems, using mathematical and statistical concepts to link a certain input (for instance rainfall) to the model output (for instance runoff). Commonly used techniques are regression, transfer functions, and system identification. The simplest of these models may be linear models, but it is common to deploy non-linear components to represent some general aspects of a catchment's response without going deeply into the real physical processes involved. An example of such an aspect is the well-known behavior that a catchment will respond much more quickly and strongly when it is already wet than when it is dry.
Models based on process descriptions. These models try to represent the physical processes observed in the real world. Typically, such models contain representations of surface runoff, subsurface flow, evapotranspiration, and channel flow, but they can be far more complicated. Within this category, models can be divided into conceptual and deterministic. Conceptual models link simplified representations of the hydrological processes in an area, whereas deterministic models seek to resolve as much of the physics of a system as possible. These models can be subdivided into single-event models and continuous simulation models.
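A minimal sketch of a conceptual model of the kind just described is a single linear reservoir, in which runoff each time step is a fixed fraction of stored water; the rainfall series and recession constant below are invented for illustration.

# Toy conceptual rainfall-runoff model: a single linear reservoir, Q = k * S.
rainfall = [0.0, 12.0, 5.0, 0.0, 0.0, 20.0, 3.0, 0.0]  # mm per time step (invented)
k = 0.3          # fraction of storage released per step (invented)
storage = 0.0    # mm
for step, p in enumerate(rainfall):
    storage += p                 # add rainfall to storage
    runoff = k * storage         # linear-reservoir outflow
    storage -= runoff
    print(f"step {step}: rain = {p:5.1f} mm  runoff = {runoff:5.2f} mm  storage = {storage:5.2f} mm")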
Recent research in hydrological modeling takes a more global approach to understanding the behavior of hydrologic systems, aiming to make better predictions and to address the major challenges in water resources management.
Transport
Water movement is a significant means by which other materials, such as soil, gravel, boulders or pollutants, are transported from place to place. Initial input to receiving waters may arise from a point source discharge or a line source or area source, such as surface runoff. Since the 1960s rather complex mathematical models have been developed, facilitated by the availability of high-speed computers. The most common pollutant classes analyzed are nutrients, pesticides, total dissolved solids and sediment.
Organizations
Intergovernmental organizations
International Hydrological Programme (IHP)
International research bodies
International Water Management Institute (IWMI)
UN-IHE Delft Institute for Water Education
National research bodies
Centre for Ecology and Hydrology – UK
Centre for Water Science, Cranfield University, UK
eawag – aquatic research, ETH Zürich, Switzerland
Institute of Hydrology, Albert-Ludwigs-University of Freiburg, Germany
United States Geological Survey – Water Resources of the United States
NOAA's National Weather Service – Office of Hydrologic Development, US
US Army Corps of Engineers Hydrologic Engineering Center, US
Hydrologic Research Center, US
NOAA Economics and Social Sciences, United States
University of Oklahoma Center for Natural Hazards and Disasters Research, US
National Hydrology Research Centre, Canada
National Institute of Hydrology, India
National and international societies
American Institute of Hydrology (AIH)
Geological Society of America (GSA) – Hydrogeology Division
American Geophysical Union (AGU) – Hydrology Section
National Ground Water Association (NGWA)
American Water Resources Association
Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI)
International Association of Hydrological Sciences (IAHS)
Statistics in Hydrology Working Group (subgroup of IAHS)
German Hydrological Society (DHG: Deutsche Hydrologische Gesellschaft)
Italian Hydrological Society (SII-IHS) – Società Idrologica Italiana
Nordic Association for Hydrology
British Hydrological Society
Russian Geographical Society (Moscow Center) – Hydrology Commission
International Association for Environmental Hydrology
International Association of Hydrogeologists
Society of Hydrologists and Meteorologists – Nepal
Basin- and catchment-wide overviews
Connected Waters Initiative, University of New South Wales – Investigating and raising awareness of groundwater and water resource issues in Australia
Murray Darling Basin Initiative, Department of Environment and Heritage, Australia
Research journals
International Journal of Hydrology Science and Technology
Hydrological Processes, ISSN 0885-6087 (paper), John Wiley & Sons
Hydrology Research, IWA Publishing (formerly Nordic Hydrology)
Journal of Hydroinformatics, IWA Publishing
Journal of Hydrologic Engineering, ASCE Publication
Journal of Hydrology
Water Research
Water Resources Research
Hydrological Sciences Journal – Journal of the International Association of Hydrological Sciences (IAHS)
Hydrology and Earth System Sciences
Journal of Hydrometeorology
See also
Aqueous solution
Climatology
Environmental engineering science
Geological Engineering
Green Kenue – a software tool for hydrologic modellers
Hydraulics
HydroCAD – hydrology and hydraulics modeling software
Hydrography
Hydrology (agriculture)
International Hydrological Programme
Nash–Sutcliffe model efficiency coefficient
Outline of hydrology
Potamal
Socio-hydrology
Soil science
Water distribution on Earth
WEAP (Water Evaluation And Planning) software to model catchment hydrology from climate and land use data
Catchment hydrology
Other water-related fields
Oceanography is the more general study of water in the oceans and estuaries.
Meteorology is the more general study of the atmosphere and of weather, including precipitation as snow and rainfall.
Limnology is the study of lakes, rivers and wetlands ecosystems. It covers the biological, chemical, physical, geological, and other attributes of all inland waters (running and standing waters, both fresh and saline, natural or man-made).
Water resources are sources of water that are useful or potentially useful. Hydrology studies the availability of those resources, but usually not their uses.
References
Further reading
Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 1: Fundamentals and Applications, Taylor and Francis, CRC Group, 636 Pages, USA.
Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 2: Modeling, Climate Change and Variability, Taylor and Francis, CRC Group, 646 Pages, USA.
Eslamian, S., 2014, (ed.) Handbook of Engineering Hydrology, Vol. 3: Environmental Hydrology and Water Management, Taylor and Francis, CRC Group, 606 Pages, USA.
External links
Hydrology.nl – Portal to international hydrology and water resources
Decision tree to choose an uncertainty method for hydrological and hydraulic modelling (archived 1 June 2013)
Experimental Hydrology Wiki
Hydraulic engineering
Environmental engineering
Environmental science
Physical geography
Physical organic chemistry
Physical organic chemistry, a term coined by Louis Hammett in 1940, refers to a discipline of organic chemistry that focuses on the relationship between chemical structures and reactivity, in particular, applying experimental tools of physical chemistry to the study of organic molecules. Specific focal points of study include the rates of organic reactions, the relative chemical stabilities of the starting materials, reactive intermediates, transition states, and products of chemical reactions, and non-covalent aspects of solvation and molecular interactions that influence chemical reactivity. Such studies provide theoretical and practical frameworks to understand how changes in structure in solution or solid-state contexts impact reaction mechanism and rate for each organic reaction of interest.
Application
Physical organic chemists use theoretical and experimental approaches to understand these foundational problems in organic chemistry, including classical and statistical thermodynamic calculations, quantum mechanical theory and computational chemistry, as well as experimental spectroscopy (e.g., NMR), spectrometry (e.g., MS), and crystallography approaches. The field therefore has applications to a wide variety of more specialized fields, including electro- and photochemistry, polymer and supramolecular chemistry, and bioorganic chemistry, enzymology, and chemical biology, as well as to commercial enterprises involving process chemistry, chemical engineering, materials science and nanotechnology, and pharmacology in drug discovery by design.
Scope
Physical organic chemistry is the study of the relationship between structure and reactivity of organic molecules. More specifically, physical organic chemistry applies the experimental tools of physical chemistry to the study of the structure of organic molecules and provides a theoretical framework that interprets how structure influences both mechanisms and rates of organic reactions. It can be thought of as a subfield that bridges organic chemistry with physical chemistry.
Physical organic chemists use both experimental and theoretical disciplines such as spectroscopy, spectrometry, crystallography, computational chemistry, and quantum theory to study both the rates of organic reactions and the relative chemical stability of the starting materials, transition states, and products. Chemists in this field work to understand the physical underpinnings of modern organic chemistry, and therefore physical organic chemistry has applications in specialized areas including polymer chemistry, supramolecular chemistry, electrochemistry, and photochemistry.
History
The term physical organic chemistry was itself coined by Louis Hammett in 1940 when he used the phrase as a title for his textbook.
Chemical structure and thermodynamics
Thermochemistry
Organic chemists use the tools of thermodynamics to study the bonding, stability, and energetics of chemical systems. This includes experiments to measure or determine the enthalpy (ΔH), entropy (ΔS), and Gibbs' free energy (ΔG) of a reaction, transformation, or isomerization. Chemists may use various chemical and mathematical analyses, such as a Van 't Hoff plot, to calculate these values.
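As a sketch of how such values are extracted in practice, the snippet below fits ln K against 1/T for a set of hypothetical equilibrium constants; the data are invented, and a constant ΔH° and ΔS° over the temperature range is assumed.

import numpy as np

R = 8.314  # J/(mol K)
T = np.array([298.0, 308.0, 318.0, 328.0])   # K (hypothetical measurements)
K = np.array([1.00, 1.76, 2.98, 4.87])       # equilibrium constants (invented)
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R        # J/mol, from the slope of ln K vs 1/T
dS = intercept * R     # J/(mol K), from the intercept
print(f"ΔH° ≈ {dH / 1000:.1f} kJ/mol, ΔS° ≈ {dS:.1f} J/(mol·K)")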
Empirical constants such as bond dissociation energy, standard heat of formation (ΔfH°), and heat of combustion (ΔcH°) are used to predict the stability of molecules and the change in enthalpy (ΔH) through the course of the reactions. For complex molecules, a ΔfH° value may not be available but can be estimated using molecular fragments with known heats of formation. This type of analysis is often referred to as Benson group increment theory, after chemist Sidney Benson who spent a career developing the concept.
The thermochemistry of reactive intermediates—carbocations, carbanions, and radicals—is also of interest to physical organic chemists. Group increment data are available for radical systems. Carbocation and carbanion stabilities can be assessed using hydride ion affinities and pKa values, respectively.
Conformational analysis
One of the primary methods for evaluating chemical stability and energetics is conformational analysis. Physical organic chemists use conformational analysis to evaluate the various types of strain present in a molecule to predict reaction products. Strain can be found in both acyclic and cyclic molecules, manifesting itself in diverse systems as torsional strain, allylic strain, ring strain, and syn-pentane strain. A-values provide a quantitative basis for predicting the conformation of a substituted cyclohexane, an important class of cyclic organic compounds whose reactivity is strongly guided by conformational effects. The A-value is the difference in the Gibbs' free energy between the axial and equatorial forms of substituted cyclohexane, and by adding together the A-values of various substituents it is possible to quantitatively predict the preferred conformation of a cyclohexane derivative.
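As a small worked example of using an A-value, the sketch below converts the commonly quoted value of roughly 1.7 kcal/mol for a methyl group into an equatorial/axial equilibrium constant and population at room temperature; treat the numbers as approximate.

import math

R = 8.314        # J/(mol K)
T = 298.0        # K
A_kcal = 1.70    # approximate A-value for a methyl group on cyclohexane (kcal/mol)
dG = A_kcal * 4184.0            # J/mol favoring the equatorial conformer
K_eq_ax = math.exp(dG / (R * T))
fraction_equatorial = K_eq_ax / (1.0 + K_eq_ax)
print(f"K(equatorial/axial) ≈ {K_eq_ax:.1f}, equatorial population ≈ {100 * fraction_equatorial:.1f}%")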
In addition to molecular stability, conformational analysis is used to predict reaction products. One commonly cited example of the use of conformational analysis is a bi-molecular elimination reaction (E2). This reaction proceeds most readily when the base removes the proton that is antiperiplanar to the leaving group. A molecular orbital analysis of this phenomenon suggests that this conformation provides the best overlap between the electrons in the R-H σ bonding orbital that is being attacked and the empty σ* antibonding orbital of the R-X bond that is being broken. By exploiting this effect, conformational analysis can be used to design molecules that possess enhanced reactivity.
The physical processes which give rise to bond rotation barriers are complex, and these barriers have been extensively studied through experimental and theoretical methods. A number of recent articles have investigated the predominance of the steric, electrostatic, and hyperconjugative contributions to rotational barriers in ethane, butane, and more substituted molecules.
Non-covalent interactions
Chemists use the study of intramolecular and intermolecular non-covalent bonding/interactions in molecules to evaluate reactivity. Such interactions include, but are not limited to, hydrogen bonding, electrostatic interactions between charged molecules, dipole-dipole interactions, polar-π and cation-π interactions, π-stacking, donor-acceptor chemistry, and halogen bonding. In addition, the hydrophobic effect—the association of organic compounds in water—is an electrostatic, non-covalent interaction of interest to chemists. The hydrophobic effect arises from many complex interactions, but it is believed to be the most important component of biomolecular recognition in water. For example, researchers elucidated the structural basis for folic acid recognition by folate receptor proteins. The strong interaction between folic acid and the folate receptor was attributed to both hydrogen bonds and hydrophobic interactions. The study of non-covalent interactions is also used to study binding and cooperativity in supramolecular assemblies and macrocyclic compounds such as crown ethers and cryptands, which can act as hosts to guest molecules.
Acid–base chemistry
The properties of acids and bases are relevant to physical organic chemistry. Organic chemists are primarily concerned with Brønsted–Lowry acids/bases as proton donors/acceptors and Lewis acids/bases as electron acceptors/donors in organic reactions. Chemists use a series of factors developed from physical chemistry (electronegativity/induction, bond strengths, resonance, hybridization, aromaticity, and solvation) to predict relative acidities and basicities.
The hard/soft acid/base principle is utilized to predict molecular interactions and reaction direction. In general, interactions between molecules of the same type are preferred. That is, hard acids will associate with hard bases, and soft acids with soft bases. The concept of hard acids and bases is often exploited in the synthesis of inorganic coordination complexes.
Kinetics
Physical organic chemists use the mathematical foundation of chemical kinetics to study the rates of reactions and reaction mechanisms. Unlike thermodynamics, which is concerned with the relative stabilities of the products and reactants (ΔG°) and their equilibrium concentrations, the study of kinetics focuses on the free energy of activation (ΔG‡), the difference in free energy between the reactant structure and the transition state structure of a reaction, and therefore allows a chemist to study the process of equilibration. Mathematically derived formalisms such as the Hammond Postulate, the Curtin-Hammett principle, and the theory of microscopic reversibility are often applied to organic chemistry. Chemists have also used the principle of thermodynamic versus kinetic control to influence reaction products.
Rate laws
The study of chemical kinetics is used to determine the rate law for a reaction. The rate law provides a quantitative relationship between the rate of a chemical reaction and the concentrations or pressures of the chemical species present. Rate laws must be determined by experimental measurement and generally cannot be elucidated from the chemical equation. The experimentally determined rate law refers to the stoichiometry of the transition state structure relative to the ground state structure. Determination of the rate law was historically accomplished by monitoring the concentration of a reactant during a reaction through gravimetric analysis, but today it is almost exclusively done through fast and unambiguous spectroscopic techniques. In most cases, the determination of rate equations is simplified by adding a large excess ("flooding") of all but one of the reactants.
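Under flooding conditions the decay of the limiting reactant is pseudo-first-order, so a linear fit of ln[A] against time yields the observed rate constant; the concentrations below are invented for illustration.

import numpy as np

t = np.array([0.0, 60.0, 120.0, 180.0, 240.0])        # s (invented sampling times)
conc = np.array([0.100, 0.074, 0.055, 0.041, 0.030])  # mol/L (invented data)
slope, intercept = np.polyfit(t, np.log(conc), 1)     # ln[A] = ln[A]0 - k_obs * t
k_obs = -slope
print(f"k_obs ≈ {k_obs:.2e} s^-1, half-life ≈ {np.log(2) / k_obs:.0f} s")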
Catalysis
The study of catalysis and catalytic reactions is very important to the field of physical organic chemistry. A catalyst participates in the chemical reaction but is not consumed in the process. A catalyst lowers the activation energy barrier (ΔG‡), increasing the rate of a reaction by either stabilizing the transition state structure or destabilizing a key reaction intermediate, and as only a small amount of catalyst is required it can provide economic access to otherwise expensive or difficult to synthesize organic molecules. Catalysts may also influence a reaction rate by changing the mechanism of the reaction.
Kinetic isotope effect
Although a rate law provides the stoichiometry of the transition state structure, it does not provide any information about breaking or forming bonds. The substitution of an isotope near a reactive position often leads to a change in the rate of a reaction. Isotopic substitution changes the potential energy of reaction intermediates and transition states because heavier isotopes form stronger bonds with other atoms. Atomic mass affects the zero-point vibrational state of the associated molecules, leading to shorter and stronger bonds in molecules with heavier isotopes and longer, weaker bonds in molecules with light isotopes. Because vibrational motions will often change during the course of a reaction, due to the making and breaking of bonds, the frequencies will be affected, and the substitution of an isotope can provide insight into the reaction mechanism and rate law.
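A rough upper bound on a primary H/D kinetic isotope effect can be estimated from the zero-point energy difference of the C–H and C–D stretches, assuming that stretch is fully lost in the transition state; the vibrational frequencies below are typical approximate values, not measurements for any specific reaction.

import math

h, c, kB = 6.626e-34, 2.998e10, 1.381e-23   # J s, cm/s, J/K
T = 298.0                                    # K
nu_CH, nu_CD = 2900.0, 2100.0                # approximate stretch wavenumbers, cm^-1
delta_zpe = 0.5 * h * c * (nu_CH - nu_CD)    # zero-point energy difference, J
kie = math.exp(delta_zpe / (kB * T))         # assumes the stretch vanishes at the TS
print(f"estimated maximum primary KIE k_H/k_D ≈ {kie:.1f} at {T:.0f} K")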
Substituent effects
The study of how substituents affect the reactivity of a molecule or the rate of reactions is of significant interest to chemists. Substituents can exert an effect through both steric and electronic interactions, the latter of which include resonance and inductive effects. The polarizability of a molecule can also be affected. Most substituent effects are analyzed through linear free energy relationships (LFERs). The most common of these is the Hammett plot analysis. This analysis compares the effect of various substituents on the ionization of benzoic acid with their impact on diverse chemical systems. The parameters of the Hammett plots are sigma (σ) and rho (ρ). The value of σ indicates the acidity of a substituted benzoic acid relative to the unsubstituted form. A positive σ value indicates the compound is more acidic, while a negative value indicates that the substituted version is less acidic. The ρ value is a measure of the sensitivity of the reaction to the change in substituent. The standard σ constants, however, do not fully capture the stabilization of a localized charge through direct resonance with the reaction center. Therefore, two new scales were produced that evaluate the stabilization of localized charge through resonance. One is σ+, which concerns substituents that stabilize positive charges via resonance, and the other is σ−, which is for groups that stabilize negative charges via resonance. Hammett analysis can be used to help elucidate the possible mechanisms of a reaction. For example, if it is predicted that the transition state structure has a build-up of negative charge relative to the ground state structure, then electron-withdrawing groups would be expected to increase the rate of the reaction.
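A Hammett analysis of this kind amounts to a linear fit of log(k_X/k_H) against σ; in the sketch below the σ constants are standard approximate literature values, while the rate ratios are invented to illustrate a reaction with ρ near +2.3.

import numpy as np

# Substituents: H, p-CH3, p-Cl, m-Cl, p-NO2 (sigma values are approximate).
sigma      = np.array([0.00, -0.17, 0.23, 0.37, 0.78])
log_kratio = np.array([0.00, -0.40, 0.52, 0.85, 1.82])   # hypothetical log(k_X/k_H)
rho, intercept = np.polyfit(sigma, log_kratio, 1)
print(f"rho ≈ {rho:.2f} (a positive rho indicates negative charge build-up in the transition state)")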
Other LFER scales have been developed. Steric and polar effects are analyzed through Taft Parameters. Changing the solvent instead of the reactant can provide insight into changes in charge during the reaction. The Grunwald-Winstein Plot provides quantitative insight into these effects.
Solvent effects
Solvents can have a powerful effect on solubility, stability, and reaction rate. A change in solvent can also allow a chemist to influence the thermodynamic or kinetic control of the reaction. Reactions proceed at different rates in different solvents due to the change in charge distribution during a chemical transformation. Solvent effects may operate on the ground state and/or transition state structures.
An example of the effect of solvent on organic reactions is seen in the comparison of SN1 and SN2 reactions.
Solvent can also have a significant effect on the thermodynamic equilibrium of a system, for instance as in the case of keto-enol tautomerizations. In non-polar aprotic solvents, the enol form is strongly favored due to the formation of an intramolecular hydrogen-bond, while in polar aprotic solvents, such as methylene chloride, the enol form is less favored due to the interaction between the polar solvent and the polar diketone. In protic solvents, the equilibrium lies towards the keto form as the intramolecular hydrogen bond competes with hydrogen bonds originating from the solvent.
A modern example of the study of solvent effects on chemical equilibrium can be seen in a study of the epimerization of chiral cyclopropylnitrile Grignard reagents. This study reports that the cis–trans equilibrium of the Grignard reagent is strongly solvent-dependent, with the preference for the cis form enhanced in THF as a reaction solvent relative to diethyl ether. However, the faster rate of cis-trans isomerization in THF results in a loss of stereochemical purity. This is a case where understanding the effect of solvent on the stability of the molecular configuration of a reagent is important with regard to the selectivity observed in an asymmetric synthesis.
Quantum chemistry
Many aspects of the structure-reactivity relationship in organic chemistry can be rationalized through resonance, electron pushing, induction, the eight electron rule, and s-p hybridization, but these are only helpful formalisms and do not represent physical reality. Due to these limitations, a true understanding of physical organic chemistry requires a more rigorous approach grounded in quantum mechanics. Quantum chemistry provides a rigorous theoretical framework capable of predicting the properties of molecules through calculation of a molecule's electronic structure, and it has become a readily available tool for physical organic chemists in the form of popular software packages. The power of quantum chemistry is built on the wave model of the atom, in which the nucleus is a very small, positively charged sphere surrounded by a diffuse electron cloud. Particles are defined by their associated wavefunction, an equation which contains all information about the particle; this information is extracted from the wavefunction through the use of mathematical operators.
The energy associated with a particular wavefunction, perhaps the most important information contained in a wavefunction, can be extracted by solving the Schrödinger equation, ĤΨ = EΨ, in which Ψ is the wavefunction, E is the energy, and Ĥ is an appropriate Hamiltonian operator. In the various forms of the Schrödinger equation, the overall size of a particle's probability distribution increases with decreasing particle mass. For this reason, nuclei are of negligible size in relation to much lighter electrons and are treated as point charges in practical applications of quantum chemistry.
Due to complex interactions which arise from electron-electron repulsion, algebraic solutions of the Schrödinger equation are only possible for systems with one electron such as the hydrogen atom, H2+, H32+, etc.; however, from these simple models arise all the familiar atomic (s,p,d,f) and bonding (σ,π) orbitals. In systems with multiple electrons, an overall multielectron wavefunction describes all of their properties at once. Such wavefunctions are generated through the linear addition of single electron wavefunctions to generate an initial guess, which is repeatedly modified until its associated energy is minimized. Thousands of guesses are often required until a satisfactory solution is found, so such calculations are performed by powerful computers. Importantly, the solutions for atoms with multiple electrons give properties such as diameter and electronegativity which closely mirror experimental data and the patterns found in the periodic table. The solutions for molecules, such as methane, provide exact representations of their electronic structure which are unobtainable by experimental methods. Instead of four discrete σ-bonds from carbon to each hydrogen atom, theory predicts a set of four bonding molecular orbitals which are delocalized across the entire molecule. Similarly, the true electronic structure of 1,3-butadiene shows delocalized π-bonding molecular orbitals stretching through the entire molecule rather than two isolated double bonds as predicted by a simple Lewis structure.
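The delocalized π system of 1,3-butadiene can be illustrated with a simple Hückel calculation, which uses only the connectivity of the four p orbitals; the sketch below diagonalizes the Hückel matrix and reports orbital energies as multiples of β relative to α (β is negative, so the +1.618 and +0.618 levels are the bonding MOs).

import numpy as np

n = 4                             # four p orbitals along the butadiene chain
H = np.zeros((n, n))
for i in range(n - 1):            # nearest neighbours interact with strength beta (set to 1)
    H[i, i + 1] = H[i + 1, i] = 1.0
levels = np.linalg.eigvalsh(H)    # eigenvalues = coefficients of beta
print("orbital energies (alpha + x*beta):", np.round(sorted(levels, reverse=True), 3))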
A complete electronic structure offers great predictive power for organic transformations and dynamics, especially in cases concerning aromatic molecules, extended π systems, bonds between metal ions and organic molecules, molecules containing nonstandard heteroatoms like selenium and boron, and the conformational dynamics of large molecules such as proteins wherein the many approximations in chemical formalisms make structure and reactivity prediction impossible. An example of how electronic structure determination is a useful tool for the physical organic chemist is the metal-catalyzed dearomatization of benzene. Chromium tricarbonyl is highly electrophilic due to the withdrawal of electron density from filled chromium d-orbitals into antibonding CO orbitals, and is able to covalently bond to the face of a benzene molecule through delocalized molecular orbitals. The CO ligands inductively draw electron density from benzene through the chromium atom, and dramatically activate benzene to nucleophilic attack. Nucleophiles are then able to react to make substituted cyclohexadienes, which can be used in further transformations such as Diels–Alder cycloadditions.
Quantum chemistry can also provide insight into the mechanism of an organic transformation without the collection of any experimental data. Because wavefunctions provide the total energy of a given molecular state, guessed molecular geometries can be optimized to give relaxed molecular structures very similar to those found through experimental methods. Reaction coordinates can then be simulated, and transition state structures solved. Solving a complete energy surface for a given reaction is therefore possible, and such calculations have been applied to many problems in organic chemistry where kinetic data is unavailable or difficult to acquire.
Spectroscopy, spectrometry, and crystallography
Physical organic chemistry often entails the identification of molecular structure, dynamics, and the concentration of reactants in the course of a reaction. The interaction of molecules with light can afford a wealth of data about such properties through nondestructive spectroscopic experiments, with light absorbed when the energy of a photon matches the difference in energy between two states in a molecule and emitted when an excited state in a molecule collapses to a lower energy state. Spectroscopic techniques are broadly classified by the type of excitation being probed, such as vibrational, rotational, electronic, nuclear magnetic resonance (NMR), and electron paramagnetic resonance spectroscopy. In addition to spectroscopic data, structure determination is often aided by complementary data collected from X-Ray diffraction and mass spectrometric experiments.
NMR and EPR spectroscopy
One of the most powerful tools in physical organic chemistry is NMR spectroscopy. An external magnetic field applied to a nucleus with non-zero spin generates two discrete states, with positive and negative spin values diverging in energy; the difference in energy can then be probed by determining the frequency of light needed to excite a change in spin state for a given magnetic field. Nuclei that are chemically inequivalent in a given molecule absorb at different frequencies, and the integrated peak area in an NMR spectrum is proportional to the number of nuclei responding to that frequency. It is possible to quantify the relative concentration of different organic molecules simply by integrating peaks in the spectrum, and many kinetic experiments can be easily and quickly performed by following the progress of a reaction within one NMR sample. Proton NMR is often used by the synthetic organic chemist because protons associated with certain functional groups give characteristic absorption energies, but NMR spectroscopy can also be performed on isotopes of nitrogen, carbon, fluorine, phosphorus, boron, and a host of other elements. In addition to simple absorption experiments, it is also possible to determine the rate of fast atom exchange reactions through suppression exchange measurements, interatomic distances through multidimensional nuclear Overhauser effect experiments, and through-bond spin-spin coupling through homonuclear correlation spectroscopy. In addition to the spin excitation properties of nuclei, it is also possible to study the properties of organic radicals through the same fundamental technique. Unpaired electrons also have a net spin, and an external magnetic field allows for the extraction of similar information through electron paramagnetic resonance (EPR) spectroscopy.
Vibrational spectroscopy
Vibrational spectroscopy, or infrared (IR) spectroscopy, allows for the identification of functional groups and, due to its low expense and robustness, is often used in teaching labs and the real-time monitoring of reaction progress in difficult-to-reach environments (high pressure, high temperature, gas phase, phase boundaries). Molecular vibrations are quantized in an analogous manner to electronic wavefunctions, with integer increases in frequency leading to higher energy states. The difference in energy between vibrational states is nearly constant, often falling in the energy range corresponding to infrared photons, because at normal temperatures molecular vibrations closely resemble harmonic oscillators. IR spectroscopy allows for the crude identification of functional groups in organic molecules, but spectra are complicated by vibrational coupling between nearby functional groups in complex molecules. Therefore, its utility in structure determination is usually limited to simple molecules. Further complicating matters is that some vibrations do not induce a change in the molecular dipole moment and will not be observable with standard IR absorption spectroscopy. These can instead be probed through Raman spectroscopy, but this technique requires a more elaborate apparatus and is less commonly performed. However, as Raman spectroscopy relies on light scattering it can be performed on microscopic samples such as the surface of a heterogeneous catalyst, a phase boundary, or on a one microliter (μL) subsample within a larger liquid volume. Vibrational spectroscopy is also used by astronomers to study the composition of molecular gas clouds, extrasolar planetary atmospheres, and planetary surfaces.
Electronic excitation spectroscopy
Electronic excitation spectroscopy, or ultraviolet-visible (UV-vis) spectroscopy, is performed in the visible and ultraviolet regions of the electromagnetic spectrum and is useful for probing the difference in energy between the highest energy occupied (HOMO) and lowest energy unoccupied (LUMO) molecular orbitals. This information is useful to physical organic chemists in the design of organic photochemical systems and dyes, as absorption of different wavelengths of visible light give organic molecules color. A detailed understanding of an electronic structure is therefore helpful in explaining electronic excitations, and through careful control of molecular structure it is possible to tune the HOMO-LUMO gap to give desired colors and excited state properties.
Mass spectrometry
Mass spectrometry is a technique which allows for the measurement of molecular mass and offers complementary data to spectroscopic techniques for structural identification. In a typical experiment a gas phase sample of an organic material is ionized and the resulting ionic species are accelerated by an applied electric field into a magnetic field. The deflection imparted by the magnetic field, often combined with the time it takes for the molecule to reach a detector, is then used to calculate the mass of the molecule. Often in the course of sample ionization large molecules break apart, and the resulting data show a parent mass and a number of smaller fragment masses; such fragmentation can give rich insight into the sequence of proteins and nucleic acid polymers. In addition to the mass of a molecule and its fragments, the distribution of isotopic variant masses can also be determined and the qualitative presence of certain elements identified due to their characteristic natural isotope distribution. The ratio of fragment mass population to the parent ion population can be compared against a library of empirical fragmentation data and matched to a known molecular structure. Combined gas chromatography and mass spectrometry is used to qualitatively identify molecules and quantitatively measure concentration with great precision and accuracy, and is widely used to test for small quantities of biomolecules and illicit narcotics in blood samples. For synthetic organic chemists it is a useful tool for the characterization of new compounds and reaction products.
Crystallography
Unlike spectroscopic methods, X-ray crystallography always allows for unambiguous structure determination and provides precise bond angles and lengths totally unavailable through spectroscopy. It is often used in physical organic chemistry to provide an absolute molecular configuration and is an important tool in improving the synthesis of a pure enantiomeric substance. It is also the only way to identify the position and bonding of elements that lack an NMR active nucleus such as oxygen. Indeed, before x-ray structural determination methods were made available in the early 20th century all organic structures were entirely conjectural: tetrahedral carbon, for example, was only confirmed by the crystal structure of diamond, and the delocalized structure of benzene was confirmed by the crystal structure of hexamethylbenzene. While crystallography provides organic chemists with highly satisfying data, it is not an everyday technique in organic chemistry because a perfect single crystal of a target compound must be grown. Only complex molecules, for which NMR data cannot be unambiguously interpreted, require this technique. In the example below, the structure of the host–guest complex would have been quite difficult to solve without a single crystal structure: there are no protons on the fullerene, and with no covalent bonds between the two halves of the organic complex spectroscopy alone was unable to prove the hypothesized structure.
See also
Journal of Physical Organic Chemistry
Gaussian, an example of a commercially available quantum mechanical software package used particularly in academic settings
References
Further reading
General
Peter Atkins & Julio de Paula, 2006, "Physical chemistry," 8th Edn., New York, NY, USA: Macmillan, accessed 21 June 2015. [E.g., see p. 422 for a group theoretical/symmetry description of atomic orbitals contributing to bonding in methane, CH4, and pp. 390f for estimation of π-electron binding energy for 1,3-butadiene by the Hückel method.]
Thomas H. Lowry & Kathleen Schueller Richardson, 1987, Mechanism and Theory in Organic Chemistry, 3rd Edn., New York, NY, USA: Harper & Row, accessed 20 June 2015. [The authoritative textbook on the subject, containing a number of appendices that provide technical details on molecular orbital theory, kinetic isotope effects, transition state theory, and radical chemistry.]
Eric V. Anslyn & Dennis A. Dougherty, 2006, Modern Physical Organic Chemistry, Sausalito, Calif.: University Science Books. [A modernized and streamlined treatment with an emphasis on applications and cross-disciplinary connections.]
Michael B. Smith & Jerry March, 2007, "March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure," 6th Ed., New York, NY, USA: Wiley & Sons, accessed 19 June 2015.
Francis A. Carey & Richard J. Sundberg, 2006, "Advanced Organic Chemistry: Part A: Structure and Mechanisms," 4th Edn., New York, NY, USA: Springer Science & Business Media, accessed 19 June 2015.
Hammett, Louis P. (1940) Physical Organic Chemistry, New York, NY, USA: McGraw Hill, accessed 20 June 2015.
History
[An outstanding starting point on the history of the field, from a critically important contributor, referencing and discussing the early Hammett text, etc.]
Thermochemistry
L. K. Doraiswamy, 2005, "Estimation of properties of organic compounds (Ch. 3)," pp. 36–51, 118–124 (refs.), in Organic Synthesis Engineering, Oxford, Oxon, ENG: Oxford University Press, accessed 22 June 2015. (This book chapter surveys a very wide range of physical properties and their estimation, including the narrow list of thermochemical properties appearing in the June 2015 WP article, placing the Benson et al. method alongside many other methods. L. K. Doraiswamy is Anson Marston Distinguished Professor of Engineering at Iowa State University.)
Organic chemistry
Phase diagram
A phase diagram in physical chemistry, engineering, mineralogy, and materials science is a type of chart used to show conditions (pressure, temperature, etc.) at which thermodynamically distinct phases (such as solid, liquid or gaseous states) occur and coexist at equilibrium.
Overview
Common components of a phase diagram are lines of equilibrium or phase boundaries, which refer to lines that mark conditions under which multiple phases can coexist at equilibrium. Phase transitions occur along lines of equilibrium. Metastable phases are not shown in phase diagrams as, despite their common occurrence, they are not equilibrium phases.
Triple points are points on phase diagrams where lines of equilibrium intersect. Triple points mark conditions at which three different phases can coexist. For example, the water phase diagram has a triple point corresponding to the single temperature and pressure at which solid, liquid, and gaseous water can coexist in a stable equilibrium (273.16 K (0.01 °C) and a partial vapor pressure of 611.7 Pa). The pressure on a pressure-temperature diagram (such as the water phase diagram shown) is the partial pressure of the substance in question.
The solidus is the temperature below which the substance is stable in the solid state. The liquidus is the temperature above which the substance is stable in a liquid state. There may be a gap between the solidus and liquidus; within the gap, the substance consists of a mixture of crystals and liquid (like a "slurry").
Working fluids are often categorized on the basis of the shape of their phase diagram.
Types
2-dimensional diagrams
Pressure vs temperature
The simplest phase diagrams are pressure–temperature diagrams of a single simple substance, such as water. The axes correspond to the pressure and temperature. The phase diagram shows, in pressure–temperature space, the lines of equilibrium or phase boundaries between the three phases of solid, liquid, and gas.
The curves on the phase diagram show the points where the free energy (and other derived properties) becomes non-analytic: their derivatives with respect to the coordinates (temperature and pressure in this example) change discontinuously (abruptly). For example, the heat capacity of a container filled with ice will change abruptly as the container is heated past the melting point. The open spaces, where the free energy is analytic, correspond to single phase regions. Single phase regions are separated by lines of non-analytical behavior, where phase transitions occur, which are called phase boundaries.
In the diagram on the right, the phase boundary between liquid and gas does not continue indefinitely. Instead, it terminates at a point on the phase diagram called the critical point. This reflects the fact that, at extremely high temperatures and pressures, the liquid and gaseous phases become indistinguishable, in what is known as a supercritical fluid. In water, the critical point occurs at around Tc = 647.096 K (373.946 °C), pc = 22.064 MPa (217.75 atm) and ρc = 356 kg/m3.
The existence of the liquid–gas critical point reveals a slight ambiguity in labelling the single phase regions. When going from the liquid to the gaseous phase, one usually crosses the phase boundary, but it is possible to choose a path that never crosses the boundary by going to the right of the critical point. Thus, the liquid and gaseous phases can blend continuously into each other. The solid–liquid phase boundary can only end in a critical point if the solid and liquid phases have the same symmetry group.
For most substances, the solid–liquid phase boundary (or fusion curve) in the phase diagram has a positive slope so that the melting point increases with pressure. This is true whenever the solid phase is denser than the liquid phase. The greater the pressure on a given substance, the closer together the molecules of the substance are brought to each other, which increases the effect of the substance's intermolecular forces. Thus, the substance requires a higher temperature for its molecules to have enough energy to break out of the fixed pattern of the solid phase and enter the liquid phase. A similar concept applies to liquid–gas phase changes.
Water is an exception which has a solid-liquid boundary with negative slope so that the melting point decreases with pressure. This occurs because ice (solid water) is less dense than liquid water, as shown by the fact that ice floats on water. At a molecular level, ice is less dense because it has a more extensive network of hydrogen bonding which requires a greater separation of water molecules. Other exceptions include antimony and bismuth.
At very high pressures above 50 GPa (500 000 atm), liquid nitrogen undergoes a liquid-liquid phase transition to a polymeric form and becomes denser than solid nitrogen at the same pressure. Under these conditions therefore, solid nitrogen also floats in its liquid.
The value of the slope dP/dT is given by the Clausius–Clapeyron equation for fusion (melting), dP/dT = ΔHfus / (T ΔVfus),
where ΔHfus is the heat of fusion which is always positive, and ΔVfus is the volume change for fusion. For most substances ΔVfus is positive so that the slope is positive. However for water and other exceptions, ΔVfus is negative so that the slope is negative.
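As a short worked example for water (using approximate textbook values for the molar volumes and heat of fusion), the sketch below evaluates dP/dT at the normal melting point and recovers the well-known negative slope.

dH_fus = 6010.0                    # J/mol, heat of fusion of ice (approximate)
T_melt = 273.15                    # K
V_ice, V_liq = 19.65e-6, 18.02e-6  # m^3/mol, approximate molar volumes
dV_fus = V_liq - V_ice             # negative, since liquid water is denser than ice
slope = dH_fus / (T_melt * dV_fus)
print(f"dP/dT ≈ {slope / 1e6:.1f} MPa/K (negative: melting point falls as pressure rises)")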
Other thermodynamic properties
In addition to temperature and pressure, other thermodynamic properties may be graphed in phase diagrams. Examples of such thermodynamic properties include specific volume, specific enthalpy, or specific entropy. For example, single-component graphs of temperature vs. specific entropy (T vs. s) for water/steam or for a refrigerant are commonly used to illustrate thermodynamic cycles such as a Carnot cycle, Rankine cycle, or vapor-compression refrigeration cycle.
Any two thermodynamic quantities may be shown on the horizontal and vertical axes of a two-dimensional diagram. Additional thermodynamic quantities may each be illustrated in increments as a series of lines—curved, straight, or a combination of curved and straight. Each of these iso-lines represents the thermodynamic quantity at a certain constant value.
3-dimensional diagrams
It is possible to envision three-dimensional (3D) graphs showing three thermodynamic quantities. For example, for a single component, a 3D Cartesian coordinate type graph can show temperature (T) on one axis, pressure (p) on a second axis, and specific volume (v) on a third. Such a 3D graph is sometimes called a p–v–T diagram. The equilibrium conditions are shown as curves on a curved surface in 3D with areas for solid, liquid, and vapor phases and areas where solid and liquid, solid and vapor, or liquid and vapor coexist in equilibrium. A line on the surface called a triple line is where solid, liquid and vapor can all coexist in equilibrium. The critical point remains a point on the surface even on a 3D phase diagram.
An orthographic projection of the 3D p–v–T graph showing pressure and temperature as the vertical and horizontal axes collapses the 3D plot into the standard 2D pressure–temperature diagram. When this is done, the solid–vapor, solid–liquid, and liquid–vapor surfaces collapse into three corresponding curved lines meeting at the triple point, which is the collapsed orthographic projection of the triple line.
Binary mixtures
Other much more complex types of phase diagrams can be constructed, particularly when more than one pure component is present. In that case, concentration becomes an important variable. Phase diagrams with more than two dimensions can be constructed that show the effect of more than two variables on the phase of a substance. Phase diagrams can use other variables in addition to or in place of temperature, pressure and composition, for example the strength of an applied electrical or magnetic field, and they can also involve substances that take on more than just three states of matter.
One type of phase diagram plots temperature against the relative concentrations of two substances in a binary mixture called a binary phase diagram, as shown at right. Such a mixture can be a solid solution, a eutectic or a peritectic, among others; these different types of mixture result in very different graphs. Another type of binary phase diagram is a boiling-point diagram for a mixture of two components, i.e. chemical compounds. For two particular volatile components at a certain pressure such as atmospheric pressure, a boiling-point diagram shows what vapor (gas) compositions are in equilibrium with given liquid compositions depending on temperature. In a typical binary boiling-point diagram, temperature is plotted on a vertical axis and mixture composition on a horizontal axis.
A two component diagram with components A and B in an "ideal" solution is shown. The construction of a liquid vapor phase diagram assumes an ideal liquid solution obeying Raoult's law and an ideal gas mixture obeying Dalton's law of partial pressure. A tie line from the liquid to the gas at constant pressure would indicate the two compositions of the liquid and gas respectively.
A simple example diagram with hypothetical components 1 and 2 in a non-azeotropic mixture is shown at right. The fact that there are two separate curved lines joining the boiling points of the pure components means that the vapor composition is usually not the same as the liquid composition the vapor is in equilibrium with. See Vapor–liquid equilibrium for more information.
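The construction described above can be illustrated with a short calculation (a sketch assuming an ideal solution; the saturation pressures and liquid composition below are arbitrary illustrative values, not data from the source): Raoult's law gives the partial pressure of each component over the liquid, and Dalton's law then gives the vapor composition at the other end of the tie line.
# Vapor composition in equilibrium with an ideal binary liquid (Raoult + Dalton).
# Saturation pressures below are arbitrary illustrative values at some fixed T.
P1_sat = 100.0      # kPa, pure component 1 (more volatile)
P2_sat = 40.0       # kPa, pure component 2
x1 = 0.30                        # liquid mole fraction of component 1
p1 = x1 * P1_sat                 # partial pressures by Raoult's law
p2 = (1.0 - x1) * P2_sat
P_total = p1 + p2                # Dalton's law of partial pressures
y1 = p1 / P_total                # vapor mole fraction of component 1
print(f"total pressure ~ {P_total:.1f} kPa, y1 ~ {y1:.2f}")   # vapor is richer in component 1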
In addition to the above-mentioned types of phase diagrams, there are many other possible combinations. Some of the major features of phase diagrams include congruent points, where a solid phase transforms directly into a liquid. There is also the peritectoid, a point where two solid phases combine into one solid phase during cooling. The inverse of this, when one solid phase transforms into two solid phases during cooling, is called the eutectoid.
A complex phase diagram of great technological importance is that of the iron–carbon system for less than 7% carbon (see steel).
The x-axis of such a diagram represents the concentration variable of the mixture. As the mixtures are typically far from dilute and their density as a function of temperature is usually unknown, the preferred concentration measure is mole fraction. A volume-based measure like molarity would be inadvisable.
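For illustration (the masses and molar masses below are arbitrary values chosen here, loosely modeled on a nickel–copper mixture), mole fractions follow directly from the component masses:
# Mole fraction of each component in a binary A-B mixture from masses.
# The masses and molar masses below are illustrative values only.
mass_A, molar_mass_A = 30.0, 58.69    # grams and g/mol (roughly nickel)
mass_B, molar_mass_B = 70.0, 63.55    # grams and g/mol (roughly copper)
n_A = mass_A / molar_mass_A           # moles of A
n_B = mass_B / molar_mass_B           # moles of B
x_A = n_A / (n_A + n_B)               # mole fraction of A
x_B = 1.0 - x_A
print(f"x_A = {x_A:.3f}, x_B = {x_B:.3f}")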
Ternary phase diagrams
A system with three components is called a ternary system. At constant pressure the maximum number of independent variables is three – the temperature and two concentration values. For a representation of ternary equilibria a three-dimensional phase diagram is required. Often such a diagram is drawn with the composition as a horizontal plane and the temperature on an axis perpendicular to this plane. To represent composition in a ternary system an equilateral triangle is used, called Gibbs triangle (see also Ternary plot).
The temperature scale is plotted on the axis perpendicular to the composition triangle. Thus, the space model of a ternary phase diagram is a right-triangular prism. The prism sides represent corresponding binary systems A-B, B-C, A-C.
However, the most common methods to present phase equilibria in a ternary system are the following:
1) projections on the concentration triangle ABC of the liquidus, solidus, solvus surfaces;
2) isothermal sections;
3) vertical sections.
Crystals
Polymorphic and polyamorphic substances have multiple crystal or amorphous phases, which can be graphed in a similar fashion to solid, liquid, and gas phases.
Mesophases
Some organic materials pass through intermediate states between solid and liquid; these states are called mesophases. Mesophases have attracted attention because they enable display devices and have become commercially important through liquid-crystal technology. Phase diagrams are used to describe the occurrence of mesophases.
See also
CALPHAD (method)
Computational thermodynamics
Congruent melting and incongruent melting
Gibbs phase rule
Glass databases
Hamiltonian mechanics
Phase separation
Saturation dome
Schreinemaker's analysis
Simple phase envelope algorithm
References
External links
Iron-Iron Carbide Phase Diagram Example
How to build a phase diagram
Phase Changes: Phase Diagrams: Part 1
Equilibrium Fe-C phase diagram
Phase diagrams for lead free solders
DoITPoMS Phase Diagram Library
DoITPoMS Teaching and Learning Package – "Phase Diagrams and Solidification"
Phase Diagrams: The Beginning of Wisdom – Open Access Journal Article
Binodal curves, tie-lines, lever rule and invariant points – How to read phase diagrams (Video by SciFox on TIB AV-Portal)
The Alloy Phase Diagram International Commission (APDIC)
Periodic table of phase diagrams of the elements (pdf poster)
Structure
A structure is an arrangement and organization of interrelated elements in a material object or system, or the object or system so organized. Material structures include man-made objects such as buildings and machines and natural objects such as biological organisms, minerals and chemicals. Abstract structures include data structures in computer science and musical form. Types of structure include a hierarchy (a cascade of one-to-many relationships), a network featuring many-to-many links, or a lattice featuring connections between components that are neighbors in space.
Load-bearing
Buildings, aircraft, skeletons, anthills, beaver dams, bridges and salt domes are all examples of load-bearing structures. The results of construction are divided into buildings and non-building structures, and make up the infrastructure of a human society. Built structures are broadly divided by their varying design approaches and standards, into categories including building structures, architectural structures, civil engineering structures and mechanical structures.
The effects of loads on physical structures are determined through structural analysis, which is one of the tasks of structural engineering. The structural elements can be classified as one-dimensional (ropes, struts, beams, arches), two-dimensional (membranes, plates, slab, shells, vaults), or three-dimensional (solid masses). Three-dimensional elements were the main option available to early structures such as Chichen Itza. A one-dimensional element has one dimension much larger than the other two, so the other dimensions can be neglected in calculations; however, the ratio of the smaller dimensions and the composition can determine the flexural and compressive stiffness of the element. Two-dimensional elements with a thin third dimension have little of either but can resist biaxial traction.
The structural elements are combined in structural systems. The majority of everyday load-bearing structures are section-active structures like frames, which are primarily composed of one-dimensional (bending) structures. Other types are vector-active structures such as trusses, surface-active structures such as shells and folded plates, form-active structures such as cable or membrane structures, and hybrid structures.
Load-bearing biological structures such as bones, teeth, shells, and tendons derive their strength from a multilevel hierarchy of structures employing biominerals and proteins, at the bottom of which are collagen fibrils.
Biological
In biology, one of the properties of life is its highly ordered structure, which can be observed at multiple levels such as in cells, tissues, organs, and organisms.
In another context, structure can also be observed in macromolecules, particularly proteins and nucleic acids. The function of these molecules is determined by their shape as well as their composition, and their structure has multiple levels. Protein structure has a four-level hierarchy. The primary structure is the sequence of amino acids that make it up. It has a peptide backbone made up of a repeated sequence of a nitrogen and two carbon atoms. The secondary structure consists of repeated patterns determined by hydrogen bonding. The two basic types are the α-helix and the β-pleated sheet. The tertiary structure is a back and forth bending of the polypeptide chain, and the quaternary structure is the way that tertiary units come together and interact. Structural biology is concerned with the biomolecular structure of macromolecules.
Chemical
Chemical structure refers to both molecular geometry and electronic structure. The structure can be represented by a variety of diagrams called structural formulas. Lewis structures use a dot notation to represent the valence electrons for an atom; these are the electrons that determine the role of the atom in chemical reactions. Bonds between atoms can be represented by lines with one line for each pair of electrons that is shared. In a simplified version of such a diagram, called a skeletal formula, only carbon-carbon bonds and functional groups are shown.
Atoms in a crystal have a structure that involves repetition of a basic unit called a unit cell. The atoms can be modeled as points on a lattice, and one can explore the effect of symmetry operations that include rotations about a point, reflections about symmetry planes, and translations (movements of all the points by the same amount). The set of such operations that map a crystal onto itself is called its space group; there are 230 possible space groups. By Neumann's law, the symmetry of a crystal determines what physical properties, including piezoelectricity and ferromagnetism, the crystal can have.
Mathematical
Musical
A large part of musical analysis involves identifying and interpreting the structure of musical works. Structure can be found at the level of part of a work, the entire work, or a group of works. Elements of music such as pitch, duration and timbre combine into small elements like motifs and phrases, and these in turn combine in larger structures. Not all music (for example, that of John Cage) has a hierarchical organization, but hierarchy makes it easier for a listener to understand and remember the music.
In analogy to linguistic terminology, motifs and phrases can be combined to make complete musical ideas such as sentences and phrases. A larger form is known as the period. One such form that was widely used between 1600 and 1900 has two phrases, an antecedent and a consequent, with a half cadence in the middle and a full cadence at the end providing punctuation. On a larger scale are single-movement forms such as the sonata form and the contrapuntal form, and multi-movement forms such as the symphony.
Social
A social structure is a pattern of relationships: the social organization of individuals in various life situations. A society can thus be seen as a system organized by a characteristic pattern of relationships, known as the social organization of the group. Sociologists have studied the changing structure of these groups. Structure and agency are two opposed theories about human behaviour, and the debate surrounding their respective influence on human thought is one of the central issues in sociology. In this context, agency refers to the individual human capacity to act independently and make free choices, while structure refers to factors such as social class, religion, gender, ethnicity, and customs that seem to limit or influence individual opportunities.
Data
In computer science, a data structure is a way of organizing information in a computer so that it can be used efficiently. Data structures are built out of two basic types: An array has an index that can be used for immediate access to any data item (some programming languages require array size to be initialized). A linked list can be reorganized, grown or shrunk, but its elements must be accessed with a pointer that links them together in a particular order. Out of these any number of other data structures can be created such as stacks, queues, trees and hash tables.
In solving a problem, a data structure is generally an integral part of the algorithm. In modern programming style, algorithms and data structures are encapsulated together in an abstract data type.
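As a minimal illustration of these ideas (not taken from the source), the sketch below encapsulates a stack, one of the derived structures mentioned above, as an abstract data type backed by a singly linked list:
# A stack (last-in, first-out) implemented on top of a singly linked list.
class _Node:
    def __init__(self, value, next_node=None):
        self.value = value        # the data item
        self.next = next_node     # pointer to the next node, or None
class Stack:
    def __init__(self):
        self._head = None         # top of the stack
    def push(self, value):
        # New items are linked in at the head, so push is O(1).
        self._head = _Node(value, self._head)
    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")
        value = self._head.value
        self._head = self._head.next
        return value
s = Stack()
for item in (1, 2, 3):
    s.push(item)
print(s.pop(), s.pop(), s.pop())   # 3 2 1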
Software
Software architecture is the specific set of choices made among possible alternatives within a framework; for example, a framework might require a database, and the architecture would specify the type and manufacturer of the database. The structure of software is the way in which it is partitioned into interrelated components. A key structural issue is minimizing dependencies between these components, which makes it possible to change one component without requiring changes in others. Structure exists to optimise for qualities such as brevity, readability, traceability, isolation and encapsulation, maintainability, extensibility, performance and efficiency, and it is expressed in choices of language, code, functions, libraries, builds, system evolution, and diagrams for flow logic and design. Structural elements reflect the requirements of the application: for example, if the system requires a high fault tolerance, then a redundant structure is needed so that if a component fails it has backups. A high redundancy is an essential part of the design of several systems in the Space Shuttle.
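The idea of a redundant structure can be sketched in a few lines of Python (all component names here are hypothetical and for illustration only): a caller tries a primary component first and falls back to backups when it fails.
# Minimal failover structure: try the primary component, then each backup.
def call_with_failover(components, request):
    last_error = None
    for component in components:          # primary first, then backups
        try:
            return component(request)
        except Exception as error:        # a failed component is skipped
            last_error = error
    raise RuntimeError("all components failed") from last_error
# Hypothetical components for illustration only.
def primary(request):
    raise ConnectionError("primary unavailable")
def backup(request):
    return f"handled '{request}' by backup"
print(call_with_failover([primary, backup], "status query"))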
Logical
As a branch of philosophy, logic is concerned with distinguishing good arguments from poor ones. A chief concern is with the structure of arguments. An argument consists of one or more premises from which a conclusion is inferred. The steps in this inference can be expressed in a formal way and their structure analyzed. Two basic types of inference are deduction and induction. In a valid deduction, the conclusion necessarily follows from the premises, regardless of whether they are true or not. An invalid deduction contains some error in the analysis. An inductive argument claims that if the premises are true, the conclusion is likely.
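As a small illustration (not part of the source text), the validity of a deductive form such as modus ponens, from "P implies Q" and "P" infer "Q", can be checked mechanically by enumerating all truth-value assignments:
# Check that modus ponens is valid: whenever the premises
# (P -> Q) and P are both true, the conclusion Q is also true.
from itertools import product
def implies(p, q):
    return (not p) or q
valid = all(
    q                                    # the conclusion must hold...
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p               # ...in every row where both premises hold
)
print("modus ponens valid:", valid)      # True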
See also
Abstract structure
Mathematical structure
Structural geology
Structure (mathematical logic)
Structuralism (philosophy of science)
References
Further reading
External links
(syllabus and reading list)
Atomic physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This includes ions as well as neutral atoms; unless otherwise stated, the term atom should be taken to include ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
Electronic configuration
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
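As a concrete illustration (this uses the Rydberg formula for hydrogen, which is introduced only later in this article, together with standard physical constants), the photon emitted in a transition between two levels has an energy equal to the level difference and a corresponding wavelength:
# Photon emitted by a hydrogen atom relaxing from level n=3 to n=2
# (Balmer-alpha), using the Rydberg formula as an illustration.
h = 6.626e-34        # Planck constant, J·s
c = 2.998e8          # speed of light, m/s
R_H = 1.097e7        # Rydberg constant, 1/m
n_upper, n_lower = 3, 2
inverse_wavelength = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)   # 1/m
wavelength = 1.0 / inverse_wavelength                              # m
photon_energy = h * c * inverse_wavelength                         # J
print(f"wavelength ~ {wavelength * 1e9:.0f} nm")                   # ~656 nm (red Balmer line)
print(f"photon energy ~ {photon_energy / 1.602e-19:.2f} eV")       # ~1.89 eV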
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
History and developments
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of the texts written in the 6th century BC to the 2nd century BC, such as those of Democritus or the Vaiśeṣika Sūtra written by Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Significant atomic physicists
See also
Particle physics
Isomeric shift
Atomism
Bibliography
References
External links
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
Joint Quantum Institute at University of Maryland and NIST
Atomic Physics on the Internet
JILA (Atomic Physics)
ORNL Physics Division
Synergetics (Fuller)
Synergetics is the empirical study of systems in transformation, with an emphasis on whole system behaviors unpredicted by the behavior of any components in isolation. R. Buckminster Fuller (1895–1983) named and pioneered the field. His two-volume work Synergetics: Explorations in the Geometry of Thinking, in collaboration with E. J. Applewhite, distills a lifetime of research into book form.
Since systems are identifiable at every scale, synergetics is necessarily interdisciplinary, embracing a broad range of scientific and philosophical topics, especially in the area of geometry, wherein the tetrahedron features as Fuller's model of the simplest system.
Despite mainstream endorsements such as the prologue by Arthur Loeb, and positive dust cover blurbs by U Thant and Arthur C. Clarke, along with the posthumous naming of the carbon allotrope "buckminsterfullerene", synergetics remains an off-beat subject, ignored for decades by most traditional curricula and academic departments, a fact Fuller himself considered evidence of a dangerous level of overspecialization.
His oeuvre inspired many developers to further pioneer offshoots from synergetics, especially geodesic dome and dwelling designs. Among Fuller's contemporaries were Joe Clinton (NASA), Don Richter (Temcor), Kenneth Snelson (tensegrity), J. Baldwin (New Alchemy Institute), and Medard Gabel (World Game). His chief assistants Amy Edmondson and Ed Popko have published primers that help popularize synergetics, Stafford Beer extended synergetics to applications in social dynamics, and J.F. Nystrom proposed a theory of computational cosmography. Research continues.
Definition
Fuller defined synergetics as follows:
A system of mensuration employing 60-degree vectorial coordination comprehensive to both physics and chemistry, and to both arithmetic and geometry, in rational whole numbers ... Synergetics explains much that has not been previously illuminated ... Synergetics follows the cosmic logic of the structural mathematics strategies of nature, which employ the paired sets of the six angular degrees of freedom, frequencies, and vectorially economical actions and their multi-alternative, equi-economical action options ... Synergetics discloses the excruciating awkwardness characterizing present-day mathematical treatment of the interrelationships of the independent scientific disciplines as originally occasioned by their mutual and separate lacks of awareness of the existence of a comprehensive, rational, coordinating system inherent in nature.
Other passages in Synergetics that outline the subject are its introduction (The Wellspring of Reality) and the section on Nature's Coordination (410.01). The chapter on Operational Mathematics (801.00-842.07) provides an easy-to-follow, easy-to-build introduction to some of Fuller's geometrical modeling techniques. So this chapter can help a new reader become familiar with Fuller's approach, style and geometry. One of Fuller's clearest expositions on "the geometry of thinking" occurs in the two-part essay "Omnidirectional Halo" which appears in his book No More Secondhand God.
Amy Edmondson describes synergetics "in the broadest terms, as the study of spatial complexity, and as such is an inherently comprehensive discipline." In her PhD study, Cheryl Clark synthesizes the scope of synergetics as "the study of how nature works, of the patterns inherent in nature, the geometry of environmental forces that impact on humanity."
Here is an abridged list of some of the discoveries Fuller claims for Synergetics, again quoting directly:
The rational volumetric quantation or constant proportionality of the octahedron, the cube, the rhombic triacontahedron, and the rhombic dodecahedron when referenced to the tetrahedron as volumetric unity.
The trigonometric identification of the great-circle trajectories of the seven axes of symmetry with the 120 basic disequilibrium LCD triangles of the spherical icosahedron. (See Sec. 1043.00.)
The A and B Quanta Modules.
Omnirationality: the identification of triangling and tetrahedroning with second- and third-powering factors.
Omni-60-degree coordination versus 90-degree coordination.
The integration of geometry and philosophy in a single conceptual system providing a common language and accounting for both the physical and metaphysical.
Significance
Several authors have tried to characterize the importance of synergetics. Amy Edmondson asserts that "Experience with synergetics encourages a new way of approaching and solving problems. Its emphasis on visual and spatial phenomena combined with Fuller's holistic approach fosters the kind of lateral thinking which so often leads to creative breakthroughs." Cheryl Clark points out that "In his thousands of lectures, Fuller urged his audiences to study synergetics, saying 'I am confident that humanity's survival depends on all of our willingness to comprehend feelingly the way nature works.'"
Tetrahedral accounting
A chief hallmark of this system of mensuration is its unit of volume: a tetrahedron defined by four closest-packed unit-radius spheres. This tetrahedron anchors a set of concentrically arranged polyhedra proportioned in a canonical manner and inter-connected by a twisting-contracting, inside-outing dynamic that Fuller named the jitterbug transformation.
Corresponding to Fuller's use of a regular tetrahedron as his unit of volume is his replacing the cube as his model of 3rd powering.(Fig. 990.01) The relative size of a shape is indexed by its "frequency," a term he deliberately chose for its resonance with scientific meanings. "Size and time are synonymous. Frequency and size are the same phenomenon." (528.00) Shapes not having any size, because purely conceptual in the Platonic sense, are "prefrequency" or "subfrequency" in contrast.
Prime means sizeless, timeless, subfrequency. Prime is prehierarchical. Prime is prefrequency. Prime is generalized, a metaphysical conceptualization experience, not a special case.... (1071.10)
Generalized principles (scientific laws), although communicated energetically, do not inhere in the "special case" episodes, and are considered "metaphysical" in that sense.
An energy event is always special case. Whenever we have experienced energy, we have special case. The physicist's first definition of physical is that it is an experience that is extracorporeally, remotely, instrumentally apprehensible. Metaphysical includes all the experiences that are excluded by the definition of physical. Metaphysical is always generalized principle.(1075.11)
Tetrahedral mensuration also involves substituting what Fuller calls the "isotropic vector matrix" (IVM) for the standard XYZ coordinate system, as his principal conceptual backdrop for special case physicality:
The synergetics coordinate system -- in contradistinction to the XYZ coordinate system -- is linearly referenced to the unit-vector-length edges of the regular tetrahedron, each of whose six unit vector edges occur in the isotropic vector matrix as the diagonals of the cube's six faces. (986.203)
The IVM scaffolding or skeletal framework is defined by cubic closest packed spheres (CCP), alternatively known as the FCC or face-centered cubic lattice, or as the octet truss in architecture (on which Fuller held a patent). The space-filling complementary tetrahedra and octahedra characterizing this matrix have prefrequency volumes 1 and 4 respectively (see above).
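These volume ratios can be checked with ordinary Euclidean volume formulas (an illustrative verification, not Fuller's own derivation): taking the edge-2 regular tetrahedron formed by four closest-packed unit-radius spheres as unit volume, the edge-2 octahedron comes out at 4, and the cube whose face diagonals are those same unit vectors comes out at 3.
# Volumes relative to the regular tetrahedron of edge 2 (four touching unit-radius spheres).
from math import sqrt
edge = 2.0                                   # distance between centers of touching unit-radius spheres
tetra = edge**3 / (6.0 * sqrt(2.0))          # regular tetrahedron volume
octa = sqrt(2.0) / 3.0 * edge**3             # regular octahedron with the same edge
cube = (edge / sqrt(2.0))**3                 # cube whose face diagonals have length `edge`
print(round(octa / tetra, 6))                # 4.0  -> octahedron = 4 tetravolumes
print(round(cube / tetra, 6))                # 3.0  -> cube = 3 tetravolumes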
A third consequence of switching to tetrahedral mensuration is Fuller's review of the standard "dimension" concept. Whereas "height, width and depth" have been promulgated as three distinct dimensions within the Euclidean context, each with its own independence, Fuller considered the tetrahedron a minimal starting point for spatial cognition. His use of "4D" is in many passages close to synonymous with the ordinary meaning of "3D," with the dimensions of physicality (time, mass) considered additional dimensions.
Geometers and "schooled" people speak of length, breadth, and height as constituting a hierarchy of three independent dimensional states -- "one-dimensional," "two-dimensional," and "three-dimensional" -- which can be conjoined like building blocks. But length, breadth, and height simply do not exist independently of one another nor independently of all the inherent characteristics of all systems and of all systems' inherent complex of interrelationships with Scenario Universe.... All conceptual consideration is inherently four-dimensional. Thus the primitive is a priori four-dimensional, always based on the four planes of reference of the tetrahedron. There can never be less than four primitive dimensions. Any one of the stars or point-to-able "points" is a system-ultratunable, tunable, or infratunable but inherently four-dimensional. (527.702, 527.712)
Synergetics does not aim to replace or invalidate pre-existing geometry or mathematics, as evidenced by the opening dedication to H.S.M. Coxeter, whom Fuller considered the greatest geometer of his era. Fuller acknowledges his vocabulary is "remote" even while defending his word choices. (250.30)
Starting with Universe
Fuller's geometric explorations provide an experiential basis for designing and refining a philosophical language. His overarching concern is the co-occurring relationship between tensile and compressive tendencies within an eternally regenerative Universe. "Universe" is a proper name he defines in terms of "partially overlapping scenarios" while avoiding any static picture or model of same. His Universe is "non-simultaneously conceptual":
Because of the fundamental nonsimultaneity of universal structuring, a single, simultaneous, static model of Universe is inherently both nonexistent and conceptually impossible as well as unnecessary. Ergo, Universe does not have a shape. Do not waste your time, as man has been doing for ages, trying to think of a unit shape "outside of which there must be something," or "within which, at center, there must be a smaller something." (307.04)
U = MP describes a first division of Universe into metaphysical and physical aspects, the former associated with invisibly cohesive tension, the latter with energy events, both associative as matter and disassociative as radiation. (162.00)
Synergetics also distinguishes between gravitational and precessional relationships among moving bodies, the latter referring to the vast majority of cosmic relationships, which are non-180-degree and do not involve bodies "falling in" to one another (130.00 533.01, 1009.21). "Precession" is a nuanced term in the synergetics vocabulary, relating to the behavior of gyroscopes, but also to side-effects. (326.13, 1009.92)
Intuitive geometry
Fuller took an intuitive approach to his studies, often going into exhaustive empirical detail while at the same time seeking to cast his findings in their most general philosophical context.
For example, his sphere packing studies led him to generalize a formula for polyhedral numbers: 2PF² + 2, where F stands for "frequency" (the number of intervals between balls along an edge) and P for a product of low order primes (some integer). He then related the "multiplicative 2" and "additive 2" in this formula to the convex versus concave aspects of shapes, and to their polar spinnability respectively.
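As an illustrative check (not from the source), taking P = 5 in this formula gives 10F² + 2, the familiar counts of spheres in successive cuboctahedral shells of a closest packing around a nuclear sphere:
# Spheres in the outer shell of a cuboctahedral packing at frequency F:
# 2 * P * F**2 + 2 with P = 5, i.e. 10F^2 + 2.
P = 5
for F in range(1, 6):
    print(F, 2 * P * F**2 + 2)    # 1 12, 2 42, 3 92, 4 162, 5 252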
These same polyhedra, developed through sphere packing and related by tetrahedral mensuration, he then spun around their various poles to form great circle networks and corresponding triangular tiles on the surface of a sphere. He exhaustively catalogues the central and surface angles of these spherical triangles and their related chord factors.
Fuller was continually on the lookout for ways to connect the dots, often purely speculatively. As an example of "dot connecting" he sought to relate the 120 basic disequilibrium LCD triangles of the spherical icosahedron to the plane net of his A module. (915.11, Fig. 913.01, Table 905.65)
The Jitterbug Transformation provides a unifying dynamic in this work, with much significance attached to the doubling and quadrupling of edges that occur, when a cuboctahedron is collapsed through icosahedral, octahedral and tetrahedral stages, then inside-outed and re-expanded in a complementary fashion. The JT forms a bridge between 3,4-fold rotationally symmetric shapes, and the 5-fold family, such as a rhombic triacontahedron, which later he analyzes in terms of the T module, another tetrahedral wedge with the same volume as his A and B modules.
He models energy transfer between systems by means of the double-edged octahedron and its ability to turn into a spiral (tetrahelix). Energy lost to one system always reappeared somewhere else in his Universe. He modeled a threshold between associative and disassociative energy patterns with his T-to-E module transformation ("E" for "Einstein").(Fig 986.411A)
"Synergetics" is in some ways a library of potential "science cartoons" (scenarios) described in prose and not heavily dependent upon mathematical notations. His demystification of a gyroscope's behavior in terms of a hammer thrower, pea shooter, and garden hose, is a good example of his commitment to using accessible metaphors. (Fig. 826.02A)
His modular dissection of a space-filling tetrahedron or MITE (minimum tetrahedron) into 2 A and 1 B module serves as a basis for more speculations about energy, the former being more energy conservative, the latter more dissipative in his analysis (986.422, 921.20, 921.30). His focus is reminiscent of later cellular automaton studies in that tessellating modules would affect their neighbors over successive time intervals.
Social commentary
Synergetics informed Fuller's social analysis of the human condition. He identified "ephemeralization" as the trend towards accomplishing more with less physical resources, as a result of increasing comprehension of such "generalized principles" as E = Mc2.
He remained concerned that humanity's conditioned reflexes were not keeping pace with its engineering potential, emphasizing the "touch and go" nature of our current predicament.
Fuller hoped the streamlining effects of a more 60-degree-based approach within natural philosophy would help bridge the gap between C.P. Snow's "two cultures" and result in a greater level of scientific literacy in the general population. (935.24)
Academic acceptance
Fuller hoped to gain traction for his nomenclature in part by dedicating Synergetics to H.S.M. Coxeter (with permission) and by citing page 71 of the latter's Regular Polytopes in order to suggest where his A & B modules (depicted above), and by extension, many of his other concepts, might enter the mathematical literature (see Fig. 950.12).
Dr. Arthur Loeb provided a prologue and an appendix to Synergetics discussing its overlap with crystallography, chemistry and virology.
Fuller originally achieved more acceptance in the humanities as a poet-philosopher and architect. For example, he features in The Pound Era by Hugh Kenner published in 1971, prior to the publication of Synergetics. The journal Nature circled Operating Manual for Spaceship Earth as one of the five most formative books on sustainability.
Errata
A major error, caught by Fuller himself, involved a misapplication of his Synergetics Constant in Synergetics 1, which led to the mistaken belief he had discovered a radius 1 sphere of 5 tetravolumes. He provided a correction in Synergetics 2 in the form of his T&E module thread. (986.206 - 986.212)
About synergy
Synergetics refers to synergy: either the concept of whole-system behaviors not predicted by the behaviors of the parts, or another term for negative entropy (negentropy).
See also
Cloud Nine
Dymaxion House
Geodesic dome
Quadray coordinates
Octet Truss
Tensegrity
Tetrahedron
Trilinear coordinates
Notes
References
R. Buckminster Fuller (in collaboration with E. J. Applewhite), Synergetics: Explorations in the Geometry of Thinking, online edition hosted by R. W. Gray with permission, originally published by Macmillan, Vol. 1 in 1975 (with a preface and contribution by Arthur L. Loeb), and Vol. 2 in 1979, as two hard-bound volumes, re-editions in paperback.
Amy Edmondson, A Fuller Explanation, EmergentWorld LLC, 2007.
External links
Complete On-Line Edition of Fuller's Synergetics
Synergetics on the Web by K. Urner
Synergetics at the Buckminster Fuller Institute
Derivatization
Derivatization is a technique used in chemistry which converts a chemical compound into a product of similar chemical structure, called a derivative.
Generally, a specific functional group of the compound participates in the derivatization reaction and transforms the starting material (educt) into a derivative with different reactivity, solubility, boiling point, melting point, aggregate state, or chemical composition. The resulting new chemical properties can be used for quantification or separation of the starting material.
Derivatization techniques are frequently employed in chemical analysis of mixtures and in surface analysis, e.g. in X-ray photoelectron spectroscopy where newly incorporated atoms label characteristic groups.
Derivatization reactions
Several characteristics are desirable for a derivatization reaction:
The reaction is reliable and proceeds to completion. Less unreacted starting material will simplify analysis. Also, this allows a small amount of analyte to be used.
The reaction is general, allowing a wide range of substrates, yet specific to a single functional group, reducing complicating interference.
The products are relatively stable, and form no degradation products within a reasonable period, facilitating analysis.
Some examples of good derivatization reactions are the formation of esters and amides via acyl chlorides.
Classical qualitative organic analysis
Classical qualitative organic analysis usually involves reacting an unknown sample with various reagents; a positive test usually involves a change in appearance — color, precipitation, etc.
These tests may be extended to give sub-gram scale products. These products may be purified by recrystallization, and their melting points taken. An example is the formation of 2,4-dinitrophenylhydrazones from ketones and 2,4-dinitrophenylhydrazine.
By consulting an appropriate reference table such as in Vogel's, the identity of the starting material may be deduced. The use of derivatives has traditionally been used to determine or confirm the identity of an unknown substance. However, due to the wide range of chemical compounds now known, it is unrealistic for these tables to be exhaustive. Modern spectroscopic and spectrometric techniques have made this technique obsolete for all but pedagogical purposes.
For gas chromatography
Polar N-H and O-H groups, which give hydrogen bonding, on a relatively nonvolatile compound may be converted to relatively nonpolar groups. The resultant product may be less polar, thus more volatile, allowing analysis by gas chromatography. Bulky, nonpolar silyl groups are often used for this purpose.
Chiral derivatizing agent
Chiral derivatizing agents react with enantiomers to give diastereomers. Since diastereomers have different physical properties, they may be further analyzed by HPLC and NMR spectroscopy.
References
Solubility
In chemistry, solubility is the ability of a substance, the solute, to form a solution with another substance, the solvent. Insolubility is the opposite property, the inability of the solute to form such a solution.
The extent of the solubility of a substance in a specific solvent is generally measured as the concentration of the solute in a saturated solution, one in which no more solute can be dissolved. At this point, the two substances are said to be at the solubility equilibrium. For some solutes and solvents, there may be no such limit, in which case the two substances are said to be "miscible in all proportions" (or just "miscible").
The solute can be a solid, a liquid, or a gas, while the solvent is usually solid or liquid. Both may be pure substances, or may themselves be solutions. Gases are always miscible in all proportions, except in very extreme situations, and a solid or liquid can be "dissolved" in a gas only by passing into the gaseous state first.
The solubility mainly depends on the composition of solute and solvent (including their pH and the presence of other dissolved substances) as well as on temperature and pressure. The dependency can often be explained in terms of interactions between the particles (atoms, molecules, or ions) of the two substances, and of thermodynamic concepts such as enthalpy and entropy.
Under certain conditions, the concentration of the solute can exceed its usual solubility limit. The result is a supersaturated solution, which is metastable and will rapidly exclude the excess solute if a suitable nucleation site appears.
The concept of solubility does not apply when there is an irreversible chemical reaction between the two substances, such as the reaction of calcium hydroxide with hydrochloric acid; even though one might say, informally, that one "dissolved" the other. The solubility is also not the same as the rate of solution, which is how fast a solid solute dissolves in a liquid solvent. This property depends on many other variables, such as the physical form of the two substances and the manner and intensity of mixing.
The concept and measure of solubility are extremely important in many sciences besides chemistry, such as geology, biology, physics, and oceanography, as well as in engineering, medicine, agriculture, and even in non-technical activities like painting, cleaning, cooking, and brewing. Most chemical reactions of scientific, industrial, or practical interest only happen after the reagents have been dissolved in a suitable solvent. Water is by far the most common such solvent.
The term "soluble" is sometimes used for materials that can form colloidal suspensions of very fine solid particles in a liquid. The quantitative solubility of such substances is generally not well-defined, however.
Quantification of solubility
The solubility of a specific solute in a specific solvent is generally expressed as the concentration of a saturated solution of the two. Any of the several ways of expressing concentration of solutions can be used, such as the mass, volume, or amount in moles of the solute for a specific mass, volume, or mole amount of the solvent or of the solution.
Per quantity of solvent
In particular, chemical handbooks often express the solubility as grams of solute per 100 millilitres of solvent (g/(100 mL), often written as g/100 ml), or as grams of solute per decilitre of solvent (g/dL); or, less commonly, as grams of solute per litre of solvent (g/L). The quantity of solvent can instead be expressed in mass, as grams of solute per 100 grams of solvent (g/(100 g), often written as g/100 g), or as grams of solute per kilogram of solvent (g/kg). The number may be expressed as a percentage in this case, and the abbreviation "w/w" may be used to indicate "weight per weight". (The values in g/L and g/kg are similar for water, but that may not be the case for other solvents.)
Alternatively, the solubility of a solute can be expressed in moles instead of mass. For example, if the quantity of solvent is given in kilograms, the value is the molality of the solution (mol/kg).
Per quantity of solution
The solubility of a substance in a liquid may also be expressed as the quantity of solute per quantity of solution, rather than of solvent. For example, following the common practice in titration, it may be expressed as moles of solute per litre of solution (mol/L), the molarity of the latter.
In more specialized contexts the solubility may be given by the mole fraction (moles of solute per total moles of solute plus solvent) or by the mass fraction at equilibrium (mass of solute per mass of solute plus solvent). Both are dimensionless numbers between 0 and 1 which may be expressed as percentages (%).
Liquid and gaseous solutes
For solutions of liquids or gases in liquids, the quantities of both substances may be given in volume rather than in mass or mole amount, such as litres of solute per litre of solvent, or litres of solute per litre of solution. The value may be given as a percentage, and the abbreviation "v/v" for "volume per volume" may be used to indicate this choice.
Conversion of solubility values
Conversion between these various ways of measuring solubility may not be trivial, since it may require knowing the density of the solution — which is often not measured, and cannot be predicted. While the total mass is conserved by dissolution, the final volume may be different from both the volume of the solvent and the sum of the two volumes.
Moreover, many solids (such as acids and salts) will dissociate in non-trivial ways when dissolved; conversely, the solvent may form coordination complexes with the molecules or ions of the solute. In those cases, the sum of the moles of molecules of solute and solvent is not really the total moles of independent particles in the solution. To sidestep that problem, the solubility per mole of solution is usually computed and quoted as if the solute does not dissociate or form complexes—that is, by pretending that the mole amount of solution is the sum of the mole amounts of the two substances.
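As a worked illustration of these conversions (the solubility, molar masses and solution density below are approximate values assumed here, not taken from this text), a solubility quoted per 100 g of solvent can be converted to molality and mole fraction directly, while molarity additionally requires the solution density:
# Convert a solubility of ~36 g NaCl per 100 g water (around 25 °C, approximate)
# to molality, mole fraction and, given an assumed solution density, molarity.
solubility_g_per_100g = 36.0
M_solute = 58.44          # g/mol, NaCl
M_water = 18.02           # g/mol
density_solution = 1.20   # g/mL, assumed approximate value
n_solute = solubility_g_per_100g / M_solute        # mol per 100 g of water
n_water = 100.0 / M_water
molality = n_solute / 0.100                        # mol per kg of water
mole_fraction = n_solute / (n_solute + n_water)    # treating NaCl as undissociated
solution_mass = 100.0 + solubility_g_per_100g      # g
solution_volume_L = solution_mass / density_solution / 1000.0
molarity = n_solute / solution_volume_L
print(f"molality      ~ {molality:.2f} mol/kg")    # ~6.2
print(f"mole fraction ~ {mole_fraction:.3f}")      # ~0.10
print(f"molarity      ~ {molarity:.2f} mol/L")     # ~5.4, depends on the assumed density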
Qualifiers used to describe extent of solubility
The extent of solubility ranges widely, from infinitely soluble (without limit, i.e. miscible) such as ethanol in water, to essentially insoluble, such as titanium dioxide in water. A number of other descriptive terms are also used to qualify the extent of solubility for a given application. For example, U.S. Pharmacopoeia gives the following terms, according to the mass msv of solvent required to dissolve one unit of mass msu of solute: (The solubilities of the examples are approximate, for water at 20–25 °C.)
The thresholds to describe something as insoluble, or similar terms, may depend on the application. For example, one source states that substances are described as "insoluble" when their solubility is less than 0.1 g per 100 mL of solvent.
Molecular view
Solubility occurs under dynamic equilibrium, which means that solubility results from the simultaneous and opposing processes of dissolution and phase joining (e.g. precipitation of solids). A stable state of the solubility equilibrium occurs when the rates of dissolution and re-joining are equal, meaning the relative amounts of dissolved and non-dissolved materials are equal. If the solvent is removed, all of the substance that had dissolved is recovered.
The term solubility is also used in some fields where the solute is altered by solvolysis. For example, many metals and their oxides are said to be "soluble in hydrochloric acid", although in fact the aqueous acid irreversibly degrades the solid to give soluble products. Most ionic solids dissociate when dissolved in polar solvents. In those cases where the solute is not recovered upon evaporation of the solvent, the process is referred to as solvolysis. The thermodynamic concept of solubility does not apply straightforwardly to solvolysis.
When a solute dissolves, it may form several species in the solution. For example, an aqueous solution of cobalt(II) chloride affords several distinct cobalt-containing species, each of which interconverts with the others.
Factors affecting solubility
Solubility is defined for specific phases. For example, the solubility of aragonite and calcite in water are expected to differ, even though they are both polymorphs of calcium carbonate and have the same chemical formula.
The solubility of one substance in another is determined by the balance of intermolecular forces between the solvent and solute, and the entropy change that accompanies the solvation. Factors such as temperature and pressure will alter this balance, thus changing the solubility.
Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions (ligands) in liquids. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. To a lesser extent, solubility will depend on the ionic strength of solutions. The last two effects can be quantified using the equation for solubility equilibrium.
For a solid that dissolves in a redox reaction, solubility is expected to depend on the potential (within the range of potentials under which the solid remains the thermodynamically stable phase). For example, solubility of gold in high-temperature water is observed to be almost an order of magnitude higher (i.e. about ten times higher) when the redox potential is controlled using a highly oxidizing Fe3O4-Fe2O3 redox buffer than with a moderately oxidizing Ni-NiO buffer.
Solubility (metastable, at concentrations approaching saturation) also depends on the physical size of the crystal or droplet of solute (or, strictly speaking, on the specific surface area or molar surface area of the solute). For quantification, see the equation in the article on solubility equilibrium. For highly defective crystals, solubility may increase with the increasing degree of disorder. Both of these effects occur because of the dependence of solubility constant on the Gibbs energy of the crystal. The last two effects, although often difficult to measure, are of practical importance. For example, they provide the driving force for precipitate aging (the crystal size spontaneously increasing with time).
Temperature
The solubility of a given solute in a given solvent is a function of temperature. Depending on the change in enthalpy (ΔH) of the dissolution reaction, i.e., on the endothermic (ΔH > 0) or exothermic (ΔH < 0) character of the dissolution reaction, the solubility of a given compound may increase or decrease with temperature. The van 't Hoff equation relates the change of solubility equilibrium constant (Ksp) to temperature change and to reaction enthalpy change. For most solids and liquids, their solubility increases with temperature because their dissolution reaction is endothermic (ΔH > 0). In liquid water at high temperatures (e.g. approaching the critical temperature), the solubility of ionic solutes tends to decrease due to the change of properties and structure of liquid water; the lower dielectric constant results in a less polar solvent and in a change of hydration energy affecting the ΔG of the dissolution reaction.
Gaseous solutes exhibit more complex behavior with temperature. As the temperature is raised, gases usually become less soluble in water (exothermic dissolution reaction related to their hydration) (to a minimum, which is below 120 °C for most permanent gases), but more soluble in organic solvents (endothermic dissolution reaction related to their solvation).
The chart shows solubility curves for some typical solid inorganic salts in liquid water (temperature is in degrees Celsius, i.e. kelvins minus 273.15). Many salts behave like barium nitrate and disodium hydrogen arsenate, and show a large increase in solubility with temperature (ΔH > 0). Some solutes (e.g. sodium chloride in water) exhibit solubility that is fairly independent of temperature (ΔH ≈ 0). A few, such as calcium sulfate (gypsum) and cerium(III) sulfate, become less soluble in water as temperature increases (ΔH < 0). This is also the case for calcium hydroxide (portlandite), whose solubility at 70 °C is about half of its value at 25 °C. The dissolution of calcium hydroxide in water is also an exothermic process (ΔH < 0). As dictated by the van 't Hoff equation and Le Chatelier's principle, lower temperatures favor the dissolution of Ca(OH)2, so portlandite solubility increases at low temperature. This temperature dependence is sometimes referred to as "retrograde" or "inverse" solubility. Occasionally, a more complex pattern is observed, as with sodium sulfate, where the less soluble decahydrate crystal (mirabilite) loses water of crystallization at 32 °C to form a more soluble anhydrous phase (thenardite) with a smaller change in Gibbs free energy (ΔG) in the dissolution reaction.
The solubility of organic compounds nearly always increases with temperature. The technique of recrystallization, used for purification of solids, depends on a solute's different solubilities in hot and cold solvent. A few exceptions exist, such as certain cyclodextrins.
Pressure
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:
(∂ln xi / ∂P)T = -(Vi,aq - Vi,cr) / (RT)
where the index i iterates the components, xi is the mole fraction of the i-th component in the solution, P is the pressure, the index T refers to constant temperature, Vi,aq is the partial molar volume of the i-th component in the solution, Vi,cr is the partial molar volume of the i-th component in the dissolving solid, and R is the universal gas constant.
The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.
Solubility of gases
Henry's law is used to quantify the solubility of gases in solvents. The solubility of a gas in a solvent is directly proportional to the partial pressure of that gas above the solvent. This relationship is similar to Raoult's law and can be written as:
p = kH c
where kH is a temperature-dependent constant (for example, 769.2 L·atm/mol for dioxygen (O2) in water at 298 K), p is the partial pressure (in atm), and c is the concentration of the dissolved gas in the liquid (in mol/L).
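Using the constant quoted above, the equilibrium concentration of dissolved oxygen in water under air can be estimated (an illustrative sketch; the O2 partial pressure is an assumed round value):
# Dissolved O2 in water at 298 K from Henry's law in the form p = kH * c, so c = p / kH.
k_H = 769.2          # L·atm/mol for O2 in water at 298 K (value quoted above)
p_O2 = 0.21          # atm, approximate partial pressure of O2 in air (assumed)
c = p_O2 / k_H                                 # mol/L
print(f"c ~ {c * 1e3:.2f} mmol/L")             # ~0.27 mmol/L
print(f"  ~ {c * 32.0 * 1000:.1f} mg/L of O2") # ~8.7 mg/L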
The solubility of gases is sometimes also quantified using Bunsen solubility coefficient.
In the presence of small bubbles, the solubility of the gas does not depend on the bubble radius in any other way than through the effect of the radius on pressure (i.e. the solubility of gas in the liquid in contact with small bubbles is increased due to pressure increase by Δp = 2γ/r; see Young–Laplace equation).
Henry's law is valid for gases that do not undergo change of chemical speciation on dissolution. Sieverts' law shows a case when this assumption does not hold.
The solubility of carbon dioxide in seawater is also affected by temperature, by the pH of the solution, and by the carbonate buffer. The decrease of the solubility of carbon dioxide in seawater as temperature increases is an important feedback (positive feedback) that exacerbates past and future climate changes, as observed in ice cores from the Vostok site in Antarctica. At the geological time scale, because of the Milankovitch cycles, the astronomical parameters of the Earth's orbit and its rotation axis progressively change and modify the solar irradiance at the Earth's surface, and temperature starts to increase. When a deglaciation period is initiated, the progressive warming of the oceans releases CO2 into the atmosphere because of its lower solubility in warmer sea water. In turn, higher levels of CO2 in the atmosphere increase the greenhouse effect, and carbon dioxide acts as an amplifier of the general warming.
Polarity
A popular aphorism used for predicting solubility is "like dissolves like" also expressed in the Latin language as "Similia similibus solventur". This statement indicates that a solute will dissolve best in a solvent that has a similar chemical structure to itself, based on favorable entropy of mixing. This view is simplistic, but it is a useful rule of thumb. The overall solvation capacity of a solvent depends primarily on its polarity. For example, a very polar (hydrophilic) solute such as urea is very soluble in highly polar water, less soluble in fairly polar methanol, and practically insoluble in non-polar solvents such as benzene. In contrast, a non-polar or lipophilic solute such as naphthalene is insoluble in water, fairly soluble in methanol, and highly soluble in non-polar benzene.
In even simpler terms, a simple ionic compound (with positive and negative ions) such as sodium chloride (common salt) is easily soluble in a highly polar solvent (with some separation of positive (δ+) and negative (δ-) charges in the covalent molecule) such as water; this is why the sea is salty, having accumulated dissolved salts since early geological ages.
The solubility is favored by entropy of mixing (ΔS) and depends on enthalpy of dissolution (ΔH) and the hydrophobic effect. The free energy of dissolution (Gibbs energy) depends on temperature and is given by the relationship: ΔG = ΔH – TΔS. Smaller ΔG means greater solubility.
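A small, purely illustrative calculation shows how the entropy term can make even an endothermic dissolution spontaneous at room temperature; the ΔH and ΔS values below are invented and do not refer to any particular solute.

```python
dH = 20e3    # J/mol, hypothetical (endothermic) enthalpy of dissolution
dS = 80.0    # J/(mol*K), hypothetical entropy of dissolution
T = 298.15   # K

dG = dH - T * dS   # Gibbs free energy of dissolution
verdict = "dissolution favoured" if dG < 0 else "dissolution not favoured"
print(f"dG = {dG / 1000:.1f} kJ/mol -> {verdict}")
```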
Chemists often exploit differences in solubilities to separate and purify compounds from reaction mixtures, using the technique of liquid-liquid extraction. This applies in vast areas of chemistry from drug synthesis to spent nuclear fuel reprocessing.
Rate of dissolution
Dissolution is not an instantaneous process. The rate of solubilization (in kg/s) is related to the solubility product and the surface area of the material. The speed at which a solid dissolves may depend on its crystallinity or lack thereof in the case of amorphous solids and the surface area (crystallite size) and the presence of polymorphism. Many practical systems illustrate this effect, for example in designing methods for controlled drug delivery. In some cases, solubility equilibria can take a long time to establish (hours, days, months, or many years; depending on the nature of the solute and other factors).
The rate of dissolution can often be expressed by the Noyes–Whitney equation or the Nernst and Brunner equation of the form:
dm/dt = A (D/d) (Cs − Cb)
where:
m = mass of dissolved material
t = time
A = surface area of the interface between the dissolving substance and the solvent
D = diffusion coefficient
d = thickness of the boundary layer of the solvent at the surface of the dissolving substance
Cs = mass concentration of the substance on the surface
Cb = mass concentration of the substance in the bulk of the solvent
For dissolution limited by diffusion (or mass transfer if mixing is present), Cs is equal to the solubility of the substance. When the dissolution rate of a pure substance is normalized to the surface area of the solid (which usually changes with time during the dissolution process), it is expressed in kg/(m²·s) and referred to as the "intrinsic dissolution rate". The intrinsic dissolution rate is defined by the United States Pharmacopeia.
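As a minimal illustration of the Nernst–Brunner form given above, the rate can be evaluated for a single instant; all input values below are invented for illustration.

```python
def dissolution_rate(A, D, d, C_s, C_b):
    """Nernst-Brunner / Noyes-Whitney rate dm/dt = (A*D/d) * (C_s - C_b).
    A in m^2, D in m^2/s, d in m, concentrations in kg/m^3; result in kg/s."""
    return (A * D / d) * (C_s - C_b)

# Hypothetical values for a small bed of dissolving particles:
rate = dissolution_rate(A=1e-3, D=5e-10, d=30e-6, C_s=10.0, C_b=1.0)
print(f"instantaneous dissolution rate: {rate:.2e} kg/s")
```

In a real dissolution process A, Cb and often d all change with time, so the expression is usually integrated numerically rather than evaluated once.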
Dissolution rates vary by orders of magnitude between different systems. Typically, very low dissolution rates parallel low solubilities, and substances with high solubilities exhibit high dissolution rates, as suggested by the Noyes-Whitney equation.
Theories of solubility
Solubility product
Solubility constants are used to describe saturated solutions of ionic compounds of relatively low solubility (see solubility equilibrium). The solubility constant is a special case of an equilibrium constant. Since it is a product of ion concentrations in equilibrium, it is also known as the solubility product. It describes the balance between dissolved ions from the salt and undissolved salt. The solubility constant is also "applicable" (i.e. useful) to precipitation, the reverse of the dissolving reaction. As with other equilibrium constants, temperature can affect the numerical value of the solubility constant. While the solubility constant is not as simple as solubility, the value of this constant is generally independent of the presence of other species in the solvent.
Other theories
The Flory–Huggins solution theory is a theoretical model describing the solubility of polymers. The Hansen solubility parameters and the Hildebrand solubility parameters are empirical methods for the prediction of solubility. It is also possible to predict solubility from other physical constants such as the enthalpy of fusion.
The octanol-water partition coefficient, usually expressed as its logarithm (Log P), is a measure of differential solubility of a compound in a hydrophobic solvent (1-octanol) and a hydrophilic solvent (water). The logarithm of these two values enables compounds to be ranked in terms of hydrophilicity (or hydrophobicity).
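As a toy illustration (the concentrations below are invented), the partition coefficient and its logarithm follow directly from the equilibrium concentrations of a solute in the two phases:

```python
from math import log10

c_octanol = 5.0e-3   # mol/L in the 1-octanol phase (hypothetical)
c_water = 1.0e-4     # mol/L in the water phase (hypothetical)

P = c_octanol / c_water
print(f"P = {P:.0f}, log P = {log10(P):.2f}")   # log P near 1.7: moderately hydrophobic
```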
The energy change associated with dissolving is usually given per mole of solute as the enthalpy of solution.
Applications
Solubility is of fundamental importance in a large number of scientific disciplines and practical applications, ranging from ore processing and nuclear reprocessing to the use of medicines, and the transport of pollutants.
Solubility is often said to be one of the "characteristic properties of a substance", which means that solubility is commonly used to describe the substance, to indicate a substance's polarity, to help to distinguish it from other substances, and as a guide to applications of the substance. For example, indigo is described as "insoluble in water, alcohol, or ether but soluble in chloroform, nitrobenzene, or concentrated sulfuric acid".
Solubility of a substance is useful when separating mixtures. For example, a mixture of salt (sodium chloride) and silica may be separated by dissolving the salt in water, and filtering off the undissolved silica. The synthesis of chemical compounds, by the milligram in a laboratory, or by the ton in industry, both make use of the relative solubilities of the desired product, as well as unreacted starting materials, byproducts, and side products to achieve separation.
Another example of this is the synthesis of benzoic acid from phenylmagnesium bromide and dry ice. Benzoic acid is more soluble in an organic solvent such as dichloromethane or diethyl ether, and when shaken with this organic solvent in a separatory funnel, will preferentially dissolve in the organic layer. The other reaction products, including the magnesium bromide, will remain in the aqueous layer, clearly showing that separation based on solubility is achieved. This process, known as liquid–liquid extraction, is an important technique in synthetic chemistry. Recycling is used to ensure maximum extraction.
Differential solubility
In flowing systems, differences in solubility often determine the dissolution-precipitation driven transport of species. This happens when different parts of the system experience different conditions. Even slightly different conditions can result in significant effects, given sufficient time.
For example, relatively low solubility compounds are found to be soluble in more extreme environments, resulting in geochemical and geological effects of the activity of hydrothermal fluids in the Earth's crust. These are often the source of high quality economic mineral deposits and precious or semi-precious gems. In the same way, compounds with low solubility will dissolve over extended time (geological time), resulting in significant effects such as extensive cave systems or Karstic land surfaces.
Solubility of ionic compounds in water
Some ionic compounds (salts) dissolve in water, which arises because of the attraction between positive and negative charges (see: solvation). For example, the salt's positive ions (e.g. Ag+) attract the partially negative oxygen atom in H2O. Likewise, the salt's negative ions (e.g. Cl−) attract the partially positive hydrogens in H2O. Note: the oxygen atom is partially negative because it is more electronegative than hydrogen, and vice versa (see: chemical polarity).
However, there is a limit to how much salt can be dissolved in a given volume of water. This concentration is the solubility and is related to the solubility product, Ksp. This equilibrium constant depends on the type of salt (AgCl vs. NaCl, for example), temperature, and the common ion effect.
One can calculate the amount of AgCl that will dissolve in 1 liter of pure water as follows:
Ksp = [Ag+] × [Cl−] / M2 (definition of solubility product; M = mol/L)
Ksp = 1.8 × 10−10 (from a table of solubility products)
[Ag+] = [Cl−], in the absence of other silver or chloride salts, so
[Ag+]2 = 1.8 × 10−10 M2
[Ag+] = 1.34 × 10−5 mol/L
The result: 1 liter of water can dissolve 1.34 × 10−5 moles of AgCl at room temperature. Compared with other salts, AgCl is poorly soluble in water. For instance, table salt (NaCl) has a much higher Ksp = 36 and is, therefore, more soluble. The following table gives an overview of solubility rules for various ionic compounds.
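The same arithmetic can be written as a short script. This is a minimal sketch that assumes a 1:1 salt dissolving in pure water, no common-ion effect and ideal behaviour; the molar mass of AgCl (about 143.3 g/mol) is used only to convert the answer to mg/L.

```python
from math import sqrt

Ksp_AgCl = 1.8e-10   # dimensionless (ion concentrations divided by 1 mol/L)

s = sqrt(Ksp_AgCl)   # [Ag+] = [Cl-] = s for a 1:1 salt in pure water
print(f"solubility of AgCl: {s:.2e} mol/L, i.e. about {s * 143.3 * 1000:.1f} mg/L")
```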
Solubility of organic compounds
The principle outlined above under polarity, that like dissolves like, is the usual guide to solubility with organic systems. For example, petroleum jelly will dissolve in gasoline because both petroleum jelly and gasoline are non-polar hydrocarbons. It will not, on the other hand, dissolve in ethyl alcohol or water, since the polarity of these solvents is too high. Sugar will not dissolve in gasoline, since sugar is too polar in comparison with gasoline. A mixture of gasoline and sugar can therefore be separated by filtration or extraction with water.
Solid solution
This term is often used in the field of metallurgy to refer to the extent that an alloying element will dissolve into the base metal without forming a separate phase. The solvus or solubility line (or curve) is the line (or lines) on a phase diagram that give the limits of solute addition. That is, the lines show the maximum amount of a component that can be added to another component and still be in solid solution. In the solid's crystalline structure, the 'solute' element can either take the place of a matrix atom within the lattice (a substitutional position; for example, chromium in iron) or take a place in a space between the lattice points (an interstitial position; for example, carbon in iron).
In microelectronic fabrication, solid solubility refers to the maximum concentration of impurities one can place into the substrate.
In solid compounds (as opposed to elements), the solubility of a solute element can also depend on the phases separating out in equilibrium. For example, the amount of Sn soluble in the ZnSb phase can depend significantly on whether the phases separating out in equilibrium are (Zn4Sb3 + Sn(L)) or (ZnSnSb2 + Sn(L)). Besides these, the ZnSb compound with Sn as a solute can separate out into other combinations of phases after the solubility limit is reached, depending on the initial chemical composition during synthesis. Each combination produces a different solubility of Sn in ZnSb. Hence, solubility studies in compounds that are concluded upon the first observation of secondary phases separating out might underestimate solubility. While the maximum number of phases separating out at once in equilibrium can be determined by the Gibbs phase rule, for chemical compounds there is no limit on the number of such phase-separating combinations. Hence, establishing the "maximum solubility" in solid compounds experimentally can be difficult, requiring equilibration of many samples. If the dominant crystallographic defect (mostly interstitial or substitutional point defects) involved in the solid solution can be chemically intuited beforehand, then using some simple thermodynamic guidelines can considerably reduce the number of samples required to establish maximum solubility.
Incongruent dissolution
Many substances dissolve congruently (i.e. the composition of the solid and the dissolved solute stoichiometrically match). However, some substances may dissolve incongruently, whereby the composition of the solute in solution does not match that of the solid. This solubilization is accompanied by alteration of the "primary solid" and possibly formation of a secondary solid phase. However, in general, some primary solid also remains and a complex solubility equilibrium is established. For example, dissolution of albite may result in formation of gibbsite:
NaAlSi3O8(s) + H+ + 7 H2O → Na+ + Al(OH)3(s) + 3 H4SiO4(aq).
In this case, the solubility of albite is expected to depend on the solid-to-solvent ratio. This kind of solubility is of great importance in geology, where it results in formation of metamorphic rocks.
In principle, both congruent and incongruent dissolution can lead to the formation of secondary solid phases in equilibrium. So, in the field of materials science, the solubility for both cases is described more generally on chemical composition phase diagrams.
Solubility prediction
Solubility is a property of interest in many aspects of science, including but not limited to: environmental predictions, biochemistry, pharmacy, drug design, agrochemical design, and protein–ligand binding. Aqueous solubility is of fundamental interest owing to the vital biological and transportation functions played by water. In addition to this clear scientific interest in water solubility and solvent effects, accurate predictions of solubility are important industrially. The ability to accurately predict a molecule's solubility represents potentially large financial savings in many chemical product development processes, such as pharmaceuticals. In the pharmaceutical industry, solubility predictions form part of the early-stage lead optimisation process of drug candidates. Solubility remains a concern all the way to formulation. A number of methods have been applied to such predictions, including quantitative structure–activity relationships (QSAR), quantitative structure–property relationships (QSPR) and data mining. These models provide efficient predictions of solubility and represent the current standard. The drawback of such models is that they can lack physical insight. A method founded in physical theory, capable of achieving similar levels of accuracy at a sensible cost, would be a powerful tool scientifically and industrially.
Methods founded in physical theory tend to use thermodynamic cycles, a concept from classical thermodynamics. The two common thermodynamic cycles used involve either the calculation of the free energy of sublimation (solid to gas without going through a liquid state) and the free energy of solvating a gaseous molecule (gas to solution), or the free energy of fusion (solid to a molten phase) and the free energy of mixing (molten to solution). These two processes are represented in the following diagrams.
These cycles have been used for attempts at first principles predictions (solving using the fundamental physical equations) using physically motivated solvent models, to create parametric equations and QSPR models and combinations of the two. The use of these cycles enables the calculation of the solvation free energy indirectly via either gas (in the sublimation cycle) or a melt (fusion cycle). This is helpful as calculating the free energy of solvation directly is extremely difficult. The free energy of solvation can be converted to a solubility value using various formulae, the most general case being shown below, where the numerator is the free energy of solvation, R is the gas constant and T is the temperature in kelvins.
Well-known fitted equations for solubility prediction are the general solubility equations. These equations stem from the work of Yalkowsky et al. The original formula is given first, followed by a revised formula which makes a different assumption of complete miscibility in octanol.
These equations are founded on the principles of the fusion cycle.
See also
Notes
References
External links
Chemical properties
Physical properties
Solutions
Underwater diving physics
Continuum (measurement)
Continuum (pl.: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states.
In physics
In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often termed as either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths).
In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts.
In mathematics and philosophy
A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's.
Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis".
In social sciences, psychology and psychiatry
In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those levels include dichotomous (a person either has a personality trait or not) and non-dichotomous approaches. While the non-dichotomous approach allows for understanding that everyone lies somewhere on a particular personality dimension, the dichotomous (nominal categorical and ordinal) approaches only seek to confirm that a particular person either has or does not have a particular mental disorder.
Expert witnesses particularly are trained to help courts in translating the data into the legal (e.g. 'guilty' vs. 'not guilty') dichotomy, which applies to law, sociology and ethics.
In linguistics
In linguistics, the range of dialects spoken over a geographical area that differ slightly between neighboring areas is known as a dialect continuum. A language continuum is a similar description for the merging of neighboring languages without a clear defined boundary. Examples of dialect or language continuums include the varieties of Italian or German; and the Romance languages, Arabic languages, or Bantu languages.
References
External links
Continuity and infinitesimals, John Bell, Stanford Encyclopedia of Philosophy
Concepts in metaphysics
Concepts in physics
Concepts in the philosophy of science
Mathematical concepts
Agronomy
Agronomy is the science and technology of producing and using plants by agriculture for food, fuel, fiber, chemicals, recreation, or land conservation. Agronomy has come to include research of plant genetics, plant physiology, meteorology, and soil science. It is the application of a combination of sciences such as biology, chemistry, economics, ecology, earth science, and genetics. Professionals of agronomy are termed agronomists.
History
Agronomy has a long and rich history dating to the Neolithic Revolution. Some of the earliest practices of agronomy are found in ancient civilizations, including Ancient Egypt, Mesopotamia, China and India. They developed various techniques for the management of soil fertility, irrigation and crop rotation.
During the 18th and 19th centuries, advances in science led to the development of modern agronomy. German chemist Justus von Liebig and John Bennett Lawes, an English entrepreneur, contributed to the understanding of plant nutrition and soil chemistry. Their work laid the groundwork for the development of modern fertilizers and agricultural practices.
Agronomy continued to evolve with the development of new technology and practices in the 20th century. From the 1960s, the Green Revolution saw the introduction of high-yield varieties of crops, modern fertilizers and improved agricultural practices. It led to an increase in global food production that helped reduce hunger and poverty in many parts of the world.
Plant breeding
This topic of agronomy involves selective breeding of plants to produce the best crops for various conditions. Plant breeding has increased crop yields and has improved the nutritional value of numerous crops, including corn, soybeans, and wheat. It has also resulted in the development of new types of plants. For example, a hybrid grain named triticale was produced by crossbreeding rye and wheat. Triticale contains more usable protein than does either rye or wheat. Agronomy has also been instrumental for fruit and vegetable production research. Furthermore, the application of plant breeding for turfgrass development has resulted in a reduction in the demand for fertilizer and water inputs (requirements), as well as turf-types with higher disease resistance.
Biotechnology
Agronomists use biotechnology to extend and expedite the development of desired characteristics. Biotechnology is often a laboratory activity requiring field testing of new crop varieties that are developed.
In addition to increasing crop yields, agronomic biotechnology is being applied increasingly for novel uses other than food. For example, oilseed is at present used mainly for margarine and other food oils, but it can be modified to produce fatty acids for detergents, substitute fuels and petrochemicals.
Soil science
Agronomists study sustainable ways to make soils more productive and profitable. They classify soils and analyze them to determine whether they contain nutrients vital for plant growth. Common macronutrients analyzed include compounds of nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. Soil is also assessed for several micronutrients, like zinc and boron. The percentage of organic matter, soil pH, and nutrient holding capacity (cation exchange capacity) are tested in a regional laboratory. Agronomists will interpret these laboratory reports and make recommendations to modify soil nutrients for optimal plant growth.
Soil conservation
Additionally, agronomists develop methods to preserve soil and decrease the effects of erosion by wind and water. For example, a technique known as contour plowing may be used to prevent soil erosion and conserve rainfall. Researchers of agronomy also seek ways to use the soil more effectively for solving other problems. Such problems include the disposal of human and animal manure, water pollution, and pesticide accumulation in the soil, as well as preserving the soil for future generations, such as after the burning of paddocks following crop production. Pasture management techniques include no-till farming, planting of soil-binding grasses along contours on steep slopes, and using contour drains of depths as much as 1 metre.
Agroecology
Agroecology is the management of agricultural systems with an emphasis on ecological and environmental applications. This topic is associated closely with work for sustainable agriculture, organic farming, and alternative food systems and the development of alternative cropping systems.
Theoretical modeling
Theoretical production ecology is the quantitative study of the growth of crops. The plant is treated as a kind of biological factory, which processes light, carbon dioxide, water, and nutrients into harvestable products. The main parameters considered are temperature, sunlight, standing crop biomass, plant production distribution, and nutrient and water supply.
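One common simplification of this 'biological factory' picture is a light-use-efficiency model, in which daily biomass gain is taken to be proportional to the photosynthetically active radiation (PAR) intercepted by the canopy. The sketch below is only a caricature of such a model; the coefficients are invented and no particular published crop model is implied.

```python
def daily_biomass_gain(par, f_intercepted, lue):
    """Biomass gain (g/m^2/day) = incident PAR (MJ/m^2/day) * fraction intercepted * light-use efficiency (g/MJ)."""
    return par * f_intercepted * lue

# Hypothetical mid-season day for a closed cereal canopy:
gain = daily_biomass_gain(par=8.0, f_intercepted=0.85, lue=2.5)
print(f"biomass gain: {gain:.1f} g/m^2/day")
```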
See also
Agricultural engineering
Agricultural policy
Agroecology
Agrology
Agrophysics
Crop farming
Food systems
Horticulture
Green Revolution
Vegetable farming
References
Bibliography
Wendy B. Murphy, The Future World of Agriculture, Watts, 1984.
Antonio Saltini, Storia delle scienze agrarie, 4 vols, Bologna 1984–89, , , ,
External links
The American Society of Agronomy (ASA)
Crop Science Society of America (CSSA)
Soil Science Society of America (SSSA)
European Society for Agronomy
The National Agricultural Library (NAL) – Comprehensive agricultural library.
Information System for Agriculture and Food Research
Applied sciences
Plant agriculture
Qualitative research
Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or field observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research.
Background
Qualitative research has been informed by several strands of philosophical thought and examines aspects of human life, including culture, expression, beliefs, morality, life stress, and imagination. Contemporary qualitative research has been influenced by a number of branches of philosophy, for example, positivism, postpositivism, critical theory, and constructivism.
The historical transitions or 'moments' in qualitative research, together with the notion of 'paradigms' (Denzin & Lincoln, 2005), have received widespread popularity over the past decades. However, some scholars have argued that the adoptions of paradigms may be counterproductive and lead to less philosophically engaged communities.
Approaches to inquiry
The use of nonquantitative material as empirical data has been growing in many areas of the social sciences, including learning sciences, development psychology and cultural psychology. Several philosophical and psychological traditions have influenced investigators' approaches to qualitative research, including phenomenology, social constructionism, symbolic interactionism, and positivism.
Philosophical traditions
Phenomenology refers to the philosophical study of the structure of an individual's consciousness and general subjective experience. Approaches to qualitative research based on constructionism, such as grounded theory, pay attention to how the subjectivity of both the researcher and the study participants can affect the theory that develops out of the research. The symbolic interactionist approach to qualitative research examines how individuals and groups develop an understanding of the world. Traditional positivist approaches to qualitative research seek a more objective understanding of the social world. Qualitative researchers have also been influenced by the sociology of knowledge and the work of Alfred Schütz, Peter L. Berger, Thomas Luckmann, and Harold Garfinkel.
Sources of data
Qualitative researchers use different sources of data to understand the topic they are studying. These data sources include interview transcripts, videos of social interactions, notes, verbal reports and artifacts such as books or works of art. The case study method exemplifies qualitative researchers' preference for depth, detail, and context. Data triangulation is also a strategy used in qualitative research. Autoethnography, the study of self, is a qualitative research method in which the researcher uses his or her personal experience to understand an issue.
Grounded theory is an inductive type of research, based on ("grounded" in) a very close look at the empirical observations a study yields. Thematic analysis involves analyzing patterns of meaning. Conversation analysis is primarily used to analyze spoken conversations. Biographical research is concerned with the reconstruction of life histories, based on biographical narratives and documents. Narrative inquiry studies the narratives that people use to describe their experience.
Data collection
Qualitative researchers may gather information through observations, note-taking, interviews, focus groups (group interviews), documents, images and artifacts.
Interviews
Research interviews are an important method of data collection in qualitative research. An interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers, to elicit information. Compared to something like a written survey, qualitative interviews allow for a significantly higher degree of intimacy, with participants often revealing personal information to their interviewers in a real-time, face-to-face setting. As such, this technique can evoke an array of significant feelings and experiences within those being interviewed. Sociologists Bredal, Stefansen and Bjørnholt identified three "participant orientations", that they described as "telling for oneself", "telling for others" and "telling for the researcher". They also proposed that these orientations implied "different ethical contracts between the participant and researcher".
Participant observation
In participant observation ethnographers get to understand a culture by directly participating in the activities of the culture they study. Participant observation extends further than ethnography and into other fields, including psychology. For example, by training to be an EMT and becoming a participant observer in the lives of EMTs, Palmer studied how EMTs cope with the stress associated with some of the gruesome emergencies they deal with.
Recursivity
In qualitative research, the idea of recursivity refers to the emergent nature of research design. In contrast to standardized research methods, recursivity embodies the idea that the qualitative researcher can change a study's design during the data collection phase.
Recursivity in qualitative research procedures contrasts to the methods used by scientists who conduct experiments. From the perspective of the scientist, data collection, data analysis, discussion of the data in the context of the research literature, and drawing conclusions should be each undertaken once (or at most a small number of times). In qualitative research however, data are collected repeatedly until one or more specific stopping conditions are met, reflecting a nonstatic attitude to the planning and design of research activities. An example of this dynamism might be when the qualitative researcher unexpectedly changes their research focus or design midway through a study, based on their first interim data analysis. The researcher can even make further unplanned changes based on another interim data analysis. Such an approach would not be permitted in an experiment. Qualitative researchers would argue that recursivity in developing the relevant evidence enables the researcher to be more open to unexpected results and emerging new constructs.
Data analysis
Qualitative researchers have a number of analytic strategies available to them.
Coding
In general, coding refers to the act of associating meaningful ideas with the data of interest. In the context of qualitative research, interpretative aspects of the coding process are often explicitly recognized and articulated; coding helps to produce specific words or short phrases believed to be useful abstractions from the data.
Pattern thematic analysis
Data may be sorted into patterns for thematic analyses as the primary basis for organizing and reporting the study findings.
Content analysis
According to Krippendorff, "Content analysis is a research technique for making replicable and valid inference from data to their context" (p. 21). It is applied to documents and written and oral communication. Content analysis is an important building block in the conceptual analysis of qualitative data. It is frequently used in sociology. For example, content analysis has been applied to research on such diverse aspects of human life as changes in perceptions of race over time, the lifestyles of contractors, and even reviews of automobiles.
Issues
Computer-assisted qualitative data analysis software (CAQDAS)
Contemporary qualitative data analyses can be supported by computer programs (termed computer-assisted qualitative data analysis software). These programs have been employed with or without detailed hand coding or labeling. Such programs do not supplant the interpretive nature of coding. The programs are aimed at enhancing analysts' efficiency at applying, retrieving, and storing the codes generated from reading the data. Many programs enhance efficiency in editing and revising codes, which allow for more effective work sharing, peer review, data examination, and analysis of large datasets.
Common qualitative data analysis software includes:
ATLAS.ti
Dedoose (mixed methods)
MAXQDA (mixed methods)
NVivo
QDA MINER
A criticism of quantitative coding approaches is that such coding sorts qualitative data into predefined (nomothetic) categories that are reflective of the categories found in objective science. The variety, richness, and individual characteristics of the qualitative data are reduced or, even, lost.
To defend against the criticism that qualitative approaches to data are too subjective, qualitative researchers assert that by clearly articulating their definitions of the codes they use and linking those codes to the underlying data, they preserve some of the richness that might be lost if the results of their research boiled down to a list of predefined categories. Qualitative researchers also assert that their procedures are repeatable, which is an idea that is valued by quantitatively oriented researchers.
Sometimes researchers rely on computers and their software to scan and reduce large amounts of qualitative data. At their most basic level, numerical coding schemes rely on counting words and phrases within a dataset; other techniques involve the analysis of phrases and exchanges in analyses of conversations. A computerized approach to data analysis can be used to aid content analysis, especially when there is a large corpus to unpack.
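At that basic 'counting' level the computation is straightforward. The snippet below is a deliberately simple illustration, not a substitute for CAQDAS packages or for interpretive coding: it counts occurrences of a few analyst-chosen code phrases in a small set of transcript strings (both the codes and the transcripts are hypothetical).

```python
from collections import Counter

code_phrases = ["work stress", "family support", "burnout"]   # analyst-defined codes (hypothetical)
transcripts = [
    "The work stress was constant, but family support kept me going.",
    "I felt burnout creeping in whenever work stress piled up.",
]

counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for phrase in code_phrases:
        counts[phrase] += lowered.count(phrase)

print(dict(counts))   # e.g. {'work stress': 2, 'family support': 1, 'burnout': 1}
```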
Trustworthiness
A central issue in qualitative research is trustworthiness (also known as credibility or, in quantitative studies, validity). There are many ways of establishing trustworthiness, including member check, interviewer corroboration, peer debriefing, prolonged engagement, negative case analysis, auditability, confirmability, bracketing, and balance. Data triangulation and eliciting examples of interviewee accounts are two of the most commonly used methods of establishing the trustworthiness of qualitative studies.
Transferability of results has also been considered as an indicator of validity.
Limitations of qualitative research
Qualitative research is not without limitations. These limitations include participant reactivity, the potential for a qualitative investigator to over-identify with one or more study participants, "the impracticality of the Glaser-Strauss idea that hypotheses arise from data unsullied by prior expectations," the inadequacy of qualitative research for testing cause-effect hypotheses, and the Baconian character of qualitative research. Participant reactivity refers to the fact that people often behave differently when they know they are being observed. Over-identifying with participants refers to a sympathetic investigator studying a group of people and ascribing, more than is warranted, a virtue or some other characteristic to one or more participants. Compared to qualitative research, experimental research and certain types of nonexperimental research (e.g., prospective studies), although not perfect, are better means for drawing cause-effect conclusions.
Glaser and Strauss, influential members of the qualitative research community, pioneered the idea that theoretically important categories and hypotheses can emerge "naturally" from the observations a qualitative researcher collects, provided that the researcher is not guided by preconceptions. The ethologist David Katz wrote "a hungry animal divides the environment into edible and inedible things....Generally speaking, objects change...according to the needs of the animal." Karl Popper, carrying forward Katz's point, wrote that "objects can be classified and can become similar or dissimilar, only in this way--by being related to needs and interests. This rule applied not only to animals but also to scientists." Popper made clear that observation is always selective, based on past research and the investigators' goals and motives, and that preconceptionless research is impossible.
The Baconian character of qualitative research refers to the idea that a qualitative researcher can collect enough observations such that categories and hypotheses will emerge from the data. Glaser and Strauss developed the idea of theoretical sampling by way of collecting observations until theoretical saturation is obtained and no additional observations are required to understand the character of the individuals under study. Bertrand Russell suggested that there can be no orderly arrangement of observations such that a hypothesis will jump out of those ordered observations; some provisional hypothesis usually guides the collection of observations.
In psychology
Community psychology
Autobiographical narrative research has been conducted in the field of community psychology. A selection of autobiographical narratives of community psychologists can be found in the book Six Community Psychologists Tell Their Stories: History, Contexts, and Narrative.
Educational psychology
Edwin Farrell used qualitative methods to understand the social reality of at-risk high school students. Later he used similar methods to understand the reality of successful high school students who came from the same neighborhoods as the at-risk students he wrote about in his previously mentioned book.
Health psychology
In the field of health psychology, qualitative methods have become increasingly employed in research on understanding health and illness and how health and illness are socially constructed in everyday life. Since then, a broad range of qualitative methods have been adopted by health psychologists, including discourse analysis, thematic analysis, narrative analysis, and interpretative phenomenological analysis. In 2015, the journal Health Psychology published a special issue on qualitative research (Gough, B., & Deatrick, J.A. (Eds.) (2015). Qualitative research in health psychology [special issue]. Health Psychology, 34(4)).
Industrial and organizational psychology
According to Doldor and colleagues, organizational psychologists extensively use qualitative research "during the design and implementation of activities like organizational change, training needs analyses, strategic reviews, and employee development plans."
Occupational health psychology
Although research in the field of occupational health psychology (OHP) has predominantly been quantitatively oriented, some OHP researchers have employed qualitative methods. Qualitative research efforts, if directed properly, can provide advantages for quantitatively oriented OHP researchers. These advantages include help with (1) theory and hypothesis development, (2) item creation for surveys and interviews, (3) the discovery of stressors and coping strategies not previously identified, (4) interpreting difficult-to-interpret quantitative findings, (5) understanding why some stress-reduction interventions fail and others succeed, and (6) providing rich descriptions of the lived lives of people at work (Schonfeld, I. S., & Farrell, E. (2010). Qualitative methods can enrich quantitative research on occupational stress: An example from one occupational group. In D. C. Ganster & P. L. Perrewé (Eds.), Research in occupational stress and wellbeing series. Vol. 8. New developments in theoretical and conceptual approaches to job stress (pp. 137-197). Bingley, UK: Emerald). Some OHP investigators have united qualitative and quantitative methods within a single study (e.g., Elfering et al. [2005]); these investigators have used qualitative methods to assess job stressors that are difficult to ascertain using standard measures, and well-validated standardized instruments to assess coping behaviors and dependent variables such as mood.
Social media psychology
Since the advent of social media in the early 2000s, formerly private accounts of personal experiences have become widely shared with the public by millions of people around the world. Disclosures are often made openly, which has contributed to social media's key role in movements like the #metoo movement.
The abundance of self-disclosure on social media has presented an unprecedented opportunity for qualitative and mixed methods researchers; mental health problems can now be investigated qualitatively more widely, at a lower cost, and with no intervention by the researchers. To take advantage of these data, researchers need to have mastered the tools for conducting qualitative research.
Academic journals
Consumption Markets & Culture
Journal of Consumer Research
Qualitative Inquiry
Qualitative Market Research
Qualitative Research
The Qualitative Report
See also
Computer-assisted qualitative data analysis software (CAQDAS)
References
Further reading
Adler, P. A. & Adler, P. (1987). : context and meaning in social inquiry / edited by Richard Jessor, Anne Colby, and Richard A. Shweder
Baškarada, S. (2014) "Qualitative Case Study Guidelines", in The Qualitative Report, 19(40): 1-25. Available from
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed method approaches. Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2000). Handbook of qualitative research ( 2nd ed.). Thousand Oaks, CA: Sage Publications.
Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE Handbook of qualitative research ( 4th ed.). Los Angeles: Sage Publications.
DeWalt, K. M. & DeWalt, B. R. (2002). Participant observation. Walnut Creek, CA: AltaMira Press.
Fischer, C.T. (Ed.) (2005). Qualitative research methods for psychologists: Introduction through empirical studies. Academic Press. .
Franklin, M. I. (2012), "Understanding Research: Coping with the Quantitative-Qualitative Divide". London/New York. Routledge
Giddens, A. (1990). The consequences of modernity. Stanford, CA: Stanford University Press.
Gubrium, J. F. and J. A. Holstein. (2000). "The New Language of Qualitative Method." New York: Oxford University Press.
Gubrium, J. F. and J. A. Holstein (2009). "Analyzing Narrative Reality." Thousand Oaks, CA: Sage.
Gubrium, J. F. and J. A. Holstein, eds. (2000). "Institutional Selves: Troubled Identities in a Postmodern World." New York: Oxford University Press.
Hammersley, M. (2008) Questioning Qualitative Inquiry, London, Sage.
Hammersley, M. (2013) What is qualitative research?, London, Bloomsbury.
Holliday, A. R. (2007). Doing and Writing Qualitative Research, 2nd Edition. London: Sage Publications
Holstein, J. A. and J. F. Gubrium, eds. (2012). "Varieties of Narrative Analysis." Thousand Oaks, CA: Sage.
Kaminski, Marek M. (2004). Games Prisoners Play. Princeton University Press. .
Malinowski, B. (1922/1961). Argonauts of the Western Pacific. New York: E. P. Dutton.
Miles, M. B. & Huberman, A. M. (1994). Qualitative Data Analysis. Thousand Oaks, CA: Sage.
Pamela Maykut, Richard Morehouse. 1994 Beginning Qualitative Research. Falmer Press.
Pernecky, T. (2016). Epistemology and Metaphysics for Qualitative Research. London, UK: Sage Publications.
Patton, M. Q. (2002). Qualitative research & evaluation methods ( 3rd ed.). Thousand Oaks, CA: Sage Publications.
Pawluch D. & Shaffir W. & Miall C. (2005). Doing Ethnography: Studying Everyday Life. Toronto, ON Canada: Canadian Scholars' Press.
Racino, J. (1999). Policy, Program Evaluation and Research in Disability: Community Support for All." New York, NY: Haworth Press (now Routledge imprint, Francis and Taylor, 2015).
Ragin, C. C. (1994). Constructing Social Research: The Unity and Diversity of Method, Pine Forge Press,
Riessman, Catherine K. (1993). "Narrative Analysis." Thousand Oaks, CA: Sage.
Rosenthal, Gabriele (2018). Interpretive Social Research. An Introduction. Göttingen, Germany: Universitätsverlag Göttingen.
Savin-Baden, M. and Major, C. (2013). "Qualitative research: The essential guide to theory and practice." London, Rutledge.
Silverman, David, (ed), (2011), "Qualitative Research: Issues of Theory, Method and Practice". Third Edition. London, Thousand Oaks, New Delhi, Sage Publications
Stebbins, Robert A. (2001) Exploratory Research in the Social Sciences. Thousand Oaks, CA: Sage.
Taylor, Steven J., Bogdan, Robert, Introduction to Qualitative Research Methods, Wiley, 1998,
Van Maanen, J. (1988) Tales of the field: on writing ethnography, Chicago: University of Chicago Press.
Wolcott, H. F. (1995). The art of fieldwork. Walnut Creek, CA: AltaMira Press.
Wolcott, H. F. (1999). Ethnography: A way of seeing. Walnut Creek, CA: AltaMira Press.
Ziman, John (2000). Real Science: what it is, and what it means. Cambridge, UK: Cambridge University Press.
External links
Qualitative Philosophy
C.Wright Mills, On intellectual Craftsmanship, The Sociological Imagination,1959
Participant Observation, Qualitative research methods: a Data collector's field guide
Analyzing and Reporting Qualitative Market Research
Overview of available QDA Software
Videos
Research methods
Psychological methodology
CPK coloring
In chemistry, the CPK coloring (for Corey–Pauling–Koltun) is a popular color convention for distinguishing atoms of different chemical elements in molecular models.
History
August Wilhelm von Hofmann was apparently the first to introduce molecular models into organic chemistry, following August Kekule's introduction of the theory of chemical structure in 1858, and Alexander Crum Brown's introduction of printed structural formulas in 1861. At a Friday Evening Discourse at London's Royal Institution on April 7, 1865, he displayed molecular models of simple organic substances such as methane, ethane, and methyl chloride, which he had had constructed from differently colored table croquet balls connected together with thin brass tubes. Hofmann's original colour scheme (carbon = black, hydrogen = white, nitrogen = blue, oxygen = red, chlorine = green, and sulphur = yellow) has evolved into the later color schemes.
In 1952, Corey and Pauling published a description of space-filling models of proteins and other biomolecules that they had been building at Caltech. Their models represented atoms by faceted hardwood balls, painted in different bright colors to indicate the respective chemical elements. Their color schema included
White for hydrogen
Black for carbon
Sky blue for nitrogen
Red for oxygen
They also built smaller models using plastic balls with the same color schema.
In 1965 Koltun patented an improved version of the Corey and Pauling modeling technique. In his patent he mentions the following colors:
White for hydrogen
Black for carbon
Blue for nitrogen
Red for oxygen
Deep yellow for sulfur
Purple for phosphorus
Light, medium, medium dark, and dark green for the halogens (F, Cl, Br, I)
Silver for metals (Co, Fe, Ni, Cu)
Typical assignments
Typical CPK color assignments include:
Several of the CPK colors refer mnemonically to colors of the pure elements or notable compounds. For example, hydrogen is a colorless gas, carbon as charcoal, graphite or coke is black, sulfur powder is yellow, chlorine is a greenish gas, bromine is a dark red liquid, iodine in ether is violet, amorphous phosphorus is red, rust is dark orange-red, etc. For some colors, such as those of oxygen and nitrogen, the inspiration is less clear. Perhaps red for oxygen is inspired by the fact that oxygen is normally required for combustion or that the oxygen-bearing chemical in blood, hemoglobin, is bright red, and the blue for nitrogen by the fact that nitrogen is the main component of Earth's atmosphere, which appears to human eyes as being colored sky blue.
It is likely that the CPK colours were inspired by models in the nineteenth century. In 1865, August Wilhelm von Hofmann, in a talk at the Royal Institution in London, was using models made from croquet balls to illustrate valence, so he used the coloured balls available to him. (At the time, croquet was the most popular sport in England, so the balls were plentiful.) "On the Combining Power of Atoms", Chemical News, 12 (1865), 176–9, 189, states that "Hofmann, at a lecture given at the Royal Institution in April 1865 made use of croquet balls of different colours to represent various kinds of atoms (e.g. carbon black, hydrogen white, chlorine green, 'fiery' oxygen red, nitrogen blue)."
Modern variants
The following table shows colors assigned to each element by some popular software products.
Column C is the original assignment by Corey and Pauling.
Column K is that of Koltun's patent.
Column J is the color scheme used by the molecular visualizer Jmol.
Column R is the scheme used by Rasmol; when two colors are shown, the second one is valid for versions 2.7.3 and later.
Column P consists of the colors in the PubChem database managed by the United States National Institutes of Health.
All colors are approximate and may depend on the display hardware and viewing conditions.
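In software, such a convention usually reduces to a simple lookup table from element symbol to a display colour. The snippet below is a hedged illustration using approximate hex values chosen in the spirit of the CPK scheme described above; real packages such as Jmol or RasMol use their own exact values, and nothing here should be read as the official palette of any particular program.

```python
# Approximate CPK-style colours (hex); values are illustrative, not taken from any one package.
CPK_COLORS = {
    "H": "#FFFFFF",   # white
    "C": "#222222",   # black / dark grey
    "N": "#2233FF",   # blue
    "O": "#FF2200",   # red
    "S": "#FFCC00",   # yellow
    "P": "#FF8800",   # orange (some schemes use purple)
    "Cl": "#00CC00",  # green
}
DEFAULT_COLOR = "#FF69B4"  # fallback for elements not listed above

def atom_color(symbol: str) -> str:
    """Return the display colour for an element symbol, falling back to a default."""
    return CPK_COLORS.get(symbol, DEFAULT_COLOR)

print(atom_color("O"), atom_color("Fe"))   # "#FF2200" and the fallback colour
```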
See also
Ball-and-stick model
Molecular graphics
Software for molecular modeling
References
External links
Physical Molecular Models
Color codes
Molecular modelling
Systematic review
A systematic review is a scholarly synthesis of the evidence on a clearly presented topic using critical methods to identify, define and assess research on the topic. A systematic review extracts and interprets data from published studies on the topic (in the scientific literature), then analyzes, describes, critically appraises and summarizes interpretations into a refined evidence-based conclusion. For example, a systematic review of randomized controlled trials is a way of summarizing and implementing evidence-based medicine.
While a systematic review may be applied in the biomedical or health care context, it may also be used where an assessment of a precisely defined subject can advance understanding in a field of research. A systematic review may examine clinical tests, public health interventions, environmental interventions, social interventions, adverse effects, qualitative evidence syntheses, methodological reviews, policy reviews, and economic evaluations.
Systematic reviews are closely related to meta-analyses, and often the same instance will combine both (being published with a subtitle of "a systematic review and meta-analysis"). The distinction between the two is that a meta-analysis uses statistical methods to induce a single number from the pooled data set (such as an effect size), whereas the strict definition of a systematic review excludes that step. However, in practice, when one is mentioned the other may often be involved, as it takes a systematic review to assemble the information that a meta-analysis analyzes, and people sometimes refer to an instance as a systematic review even if it includes the meta-analytical component.
An understanding of systematic reviews and how to implement them in practice is common for professionals in health care, public health, and public policy.
Systematic reviews contrast with a type of review often called a narrative review. Systematic reviews and narrative reviews both review the literature (the scientific literature), but the term literature review without further specification refers to a narrative review.
Characteristics
A systematic review can be designed to provide a thorough summary of current literature relevant to a research question. A systematic review uses a rigorous and transparent approach for research synthesis, with the aim of assessing and, where possible, minimizing bias in the findings. While many systematic reviews are based on an explicit quantitative meta-analysis of available data, there are also qualitative reviews and other types of mixed-methods reviews which adhere to standards for gathering, analyzing and reporting evidence.
Systematic reviews of quantitative data or mixed-method reviews sometimes use statistical techniques (meta-analysis) to combine results of eligible studies. Scoring levels are sometimes used to rate the quality of the evidence depending on the methodology used, although this is discouraged by the Cochrane Library. As evidence rating can be subjective, multiple people may be consulted to resolve any scoring differences between how evidence is rated.
The EPPI-Centre, Cochrane, and the Joanna Briggs Institute have been influential in developing methods for combining both qualitative and quantitative research in systematic reviews. Several reporting guidelines exist to standardise reporting about how systematic reviews are conducted. Such reporting guidelines are not quality assessment or appraisal tools. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement suggests a standardized way to ensure a transparent and complete reporting of systematic reviews, and is now required for this kind of research by more than 170 medical journals worldwide. Several specialized PRISMA guideline extensions have been developed to support particular types of studies or aspects of the review process, including PRISMA-P for review protocols and PRISMA-ScR for scoping reviews. A list of PRISMA guideline extensions is hosted by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network. However, the PRISMA guidelines have been found to be limited to intervention research and the guidelines have to be changed in order to fit non-intervention research. As a result, Non-Interventional, Reproducible, and Open (NIRO) Systematic Reviews was created to counter this limitation.
For qualitative reviews, reporting guidelines include ENTREQ (Enhancing transparency in reporting the synthesis of qualitative research) for qualitative evidence syntheses; RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) for meta-narrative and realist reviews; and eMERGe (Improving reporting of Meta-Ethnography) for meta-ethnography.
Developments in systematic reviews during the 21st century included realist reviews and the meta-narrative approach, both of which addressed problems of variation in methods and heterogeneity existing on some subjects.
Types
There are over 30 types of systematic review, and Table 1 below non-exhaustively summarises some of these. There is not always consensus on the boundaries and distinctions between the approaches described below.
Scoping reviews
Scoping reviews are distinct from systematic reviews in several ways. A scoping review is an attempt to search for concepts by mapping the language and data which surrounds those concepts and adjusting the search method iteratively to synthesize evidence and assess the scope of an area of inquiry. This can mean that the concept search and method (including data extraction, organisation and analysis) are refined throughout the process, sometimes requiring deviations from any protocol or original research plan. A scoping review may often be a preliminary stage before a systematic review, which 'scopes' out an area of inquiry and maps the language and key concepts to determine if a systematic review is possible or appropriate, or to lay the groundwork for a full systematic review. The goal can be to assess how much data or evidence is available regarding a certain area of interest. This process is further complicated if it is mapping concepts across multiple languages or cultures.
As a scoping review should be systematically conducted and reported (with a transparent and repeatable method), some academic publishers categorize them as a kind of 'systematic review', which may cause confusion. Scoping reviews are helpful when it is not possible to carry out a systematic synthesis of research findings, for example, when there are no published clinical trials in the area of inquiry. Scoping reviews are helpful when determining if it is possible or appropriate to carry out a systematic review, and are a useful method when an area of inquiry is very broad, for example, exploring how the public are involved in all stages of systematic reviews.
There is still a lack of clarity when defining the exact method of a scoping review as it is both an iterative process and is still relatively new. There have been several attempts to improve the standardisation of the method, for example via a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline extension for scoping reviews (PRISMA-ScR). PROSPERO (the International Prospective Register of Systematic Reviews) does not permit the submission of protocols of scoping reviews, although some journals will publish protocols for scoping reviews.
Stages
While there are multiple kinds of systematic review methods, the main stages of a review can be summarised as follows:
Defining the research question
Reported 'best practices' include defining an answerable question and publishing the protocol of the review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between methodology and protocol. Clinical reviews of quantitative data are often structured using the mnemonic PICO, which stands for 'Population or Problem', 'Intervention or Exposure', 'Comparison', and 'Outcome', with other variations existing for other kinds of research. For qualitative reviews, the analogous mnemonic PICo stands for 'Population or Problem', 'Interest', and 'Context'.
Searching for sources
Relevant criteria can include selecting research that is of good quality and answers the defined question. The search strategy should be designed to retrieve literature that matches the protocol's specified inclusion and exclusion criteria. The methodology section of a systematic review should list all of the databases and citation indices that were searched. The titles and abstracts of identified articles can be checked against predetermined criteria for eligibility and relevance. Each included study may be assigned an objective assessment of methodological quality, preferably by using methods conforming to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, or the standards of Cochrane.
Common information sources used in searches include scholarly databases of peer-reviewed articles such as MEDLINE, Web of Science, Embase, and PubMed, as well as sources of unpublished literature such as clinical trial registries and grey literature collections. Key references can also be yielded through additional methods such as citation searching, reference list checking (related to a search method called 'pearl growing'), manually searching information sources not indexed in the major electronic databases (sometimes called 'hand-searching'), and directly contacting experts in the field.
To be systematic, searchers must use a combination of search skills and tools such as database subject headings, keyword searching, Boolean operators, and proximity searching, while attempting to balance sensitivity (systematicity) and precision (accuracy). Inviting and involving an experienced information professional or librarian can improve the quality of systematic review search strategies and reporting.
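The balance between sensitivity and precision can be made concrete with a small calculation. The sketch below uses purely hypothetical counts, not figures from any real review, to show how a broad search trades precision for sensitivity.

```python
# Minimal sketch: sensitivity (recall) and precision of a literature search,
# using hypothetical counts only.

def search_metrics(relevant_retrieved, total_relevant, total_retrieved):
    """Return (sensitivity, precision) for a search strategy."""
    sensitivity = relevant_retrieved / total_relevant   # share of eligible studies found
    precision = relevant_retrieved / total_retrieved    # share of retrieved records that are eligible
    return sensitivity, precision

# A broad, highly sensitive search: finds 48 of 50 eligible studies among 12,000 records.
print(search_metrics(48, 50, 12_000))   # (0.96, 0.004)

# A narrower, more precise search: finds 30 of 50 eligible studies among 600 records.
print(search_metrics(30, 50, 600))      # (0.6, 0.05)
```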
'Extraction' of relevant data
Relevant data are 'extracted' from the data sources according to the review method. The data extraction method is specific to the kind of data, and data extracted on 'outcomes' is only relevant to certain types of reviews. For example, a systematic review of clinical trials might extract data about how the research was done (often called the method or 'intervention'), who participated in the research (including how many people), how it was paid for (for example, funding sources) and what happened (the outcomes). In an intervention effect review, where a meta-analysis is possible, the extracted data are then 'combined'.
Assess the eligibility of the data
This stage involves assessing the eligibility of data for inclusion in the review, by judging it against criteria identified at the first stage. This can include assessing if a data source meets the eligibility criteria, and recording why decisions about inclusion or exclusion in the review were made. Software can be used to support the selection process, including text mining tools and machine learning, which can automate aspects of the process. The 'Systematic Review Toolbox' is a community-driven, web-based catalogue of tools to help reviewers choose appropriate tools for their reviews.
Analyse and combine the data
Analysing and combining data can provide an overall result from all the data. Because this combined result may draw on qualitative or quantitative data from all eligible sources, it is considered to provide stronger evidence: the more data included in a review, the more confident we can be in its conclusions. When appropriate, some systematic reviews include a meta-analysis, which uses statistical methods to combine data from multiple sources. A review might use quantitative data, or might employ a qualitative meta-synthesis, which synthesises data from qualitative studies. A review may also bring together the findings from quantitative and qualitative studies in a mixed methods or overarching synthesis. The combination of data from a meta-analysis can sometimes be visualised. One method uses a forest plot (also called a blobbogram). In an intervention effect review, the diamond in the 'forest plot' represents the combined results of all the data included. An example of a 'forest plot' is the Cochrane Collaboration logo. The logo is a forest plot of one of the first reviews which showed that corticosteroids given to women who are about to give birth prematurely can save the life of the newborn child.
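As a rough illustration of the statistical pooling that produces the diamond in a forest plot, the sketch below performs a fixed-effect, inverse-variance meta-analysis on hypothetical study results. It is a minimal teaching example under those assumptions, not the procedure of any specific review or software package.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
# Effect sizes and standard errors are hypothetical (e.g. log odds ratios).
import math

studies = [
    (-0.30, 0.15),
    (-0.10, 0.20),
    (-0.45, 0.25),
]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```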
Recent visualisation innovations include the albatross plot, which plots p-values against sample sizes, with approximate effect-size contours superimposed to facilitate analysis. The contours can be used to infer effect sizes from studies that have been analysed and reported in diverse ways. Such visualisations may have advantages over other types when reviewing complex interventions.
Communication and dissemination
Once these stages are complete, the review may be published, disseminated, and translated into practice after being adopted as evidence. The UK National Institute for Health Research (NIHR) defines dissemination as "getting the findings of research to the people who can make use of them to maximise the benefit of the research without delay".
Some users do not have the time to read large and complex documents, or may be unaware of, or unable to access, newly published research. Researchers are therefore developing skills to use creative communication methods such as illustrations, blogs, infographics and board games to share the findings of systematic reviews.
Automation
A living systematic review is a newer kind of semi-automated, online summary of research that is updated as new research becomes available. The difference between a living systematic review and a conventional systematic review is the publication format. Living systematic reviews are "dynamic, persistent, online-only evidence summaries, which are updated rapidly and frequently".
The automation or semi-automation of the systematic review process itself is increasingly being explored. While there is little evidence to demonstrate that it is as accurate as manual review or involves less manual effort, efforts to promote training in, and use of, artificial intelligence for the process are increasing.
Research fields
Health and medicine
Current use of systematic reviews in medicine
Many organisations around the world use systematic reviews, with the methodology depending on the guidelines being followed. Organisations which use systematic reviews in medicine and human health include the National Institute for Health and Care Excellence (NICE, UK), the Agency for Healthcare Research and Quality (AHRQ, US), and the World Health Organization. Most notable among international organisations is Cochrane, a group of over 37,000 specialists in healthcare who systematically review randomised trials of the effects of prevention, treatments, and rehabilitation as well as health systems interventions. They sometimes also include the results of other types of research. Cochrane Reviews are published in The Cochrane Database of Systematic Reviews section of the Cochrane Library. The 2015 impact factor for The Cochrane Database of Systematic Reviews was 6.103, and it was ranked 12th in the Medicine, General & Internal category.
There are several types of systematic reviews, including:
Intervention reviews assess the benefits and harms of interventions used in healthcare and health policy.
Diagnostic test accuracy reviews assess how well a diagnostic test performs in diagnosing and detecting a particular disease. Free software with a graphical user interface, such as MetaDTA and CAST-HSROC, is available for conducting diagnostic test accuracy reviews (a minimal illustration of the underlying calculations appears after this list).
Methodology reviews address issues relevant to how systematic reviews and clinical trials are conducted and reported.
Qualitative reviews synthesize qualitative evidence to address questions on aspects other than effectiveness.
Prognosis reviews address the probable course or future outcome(s) of people with a health problem.
Overviews of Systematic Reviews (OoRs) compile multiple pieces of evidence from systematic reviews into a single accessible document, sometimes referred to as umbrella reviews.
Living systematic reviews are continually updated, incorporating relevant new evidence as it becomes available.
Rapid reviews are a form of knowledge synthesis that "accelerates the process of conducting a traditional systematic review through streamlining or omitting specific methods to produce evidence for stakeholders in a resource-efficient manner".
Reviews of complex health interventions in complex systems aim to improve evidence synthesis and guideline development.
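As referenced in the diagnostic test accuracy item above, the sketch below shows the basic 2x2-table quantities (sensitivity and specificity) that such reviews pool across studies. The counts are hypothetical and used only for illustration.

```python
# Minimal sketch of the per-study quantities summarised in a diagnostic
# test accuracy review; the counts below are hypothetical.

def dta_summary(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    return sensitivity, specificity

# One hypothetical primary study: 90 true positives, 15 false positives,
# 10 false negatives, 185 true negatives.
sens, spec = dta_summary(tp=90, fp=15, fn=10, tn=185)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # about 0.90 and 0.93
```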
Patient and public involvement in systematic reviews
There are various ways patients and the public can be involved in producing systematic reviews and other outputs. Tasks for public members can be organised as 'entry level' or higher. Tasks include:
Joining a collaborative volunteer effort to help categorise and summarise healthcare evidence
Data extraction and risk of bias assessment
Translation of reviews into other languages
A systematic review of stakeholder involvement in systematic reviews aimed to document the evidence base and to describe how stakeholders have been involved; thirty percent of the included reviews involved patients and/or carers. The ACTIVE framework provides a way to describe how people are involved in systematic reviews and may be used to support systematic review authors in planning people's involvement. Standardised Data on Initiatives (STARDIT) is another proposed way of reporting who has been involved in which tasks during research, including systematic reviews.
There has been some criticism of how Cochrane prioritises systematic reviews. Cochrane has a project that involved people in helping identify research priorities to inform Cochrane Reviews. In 2014, the Cochrane–Wikipedia partnership was formalised.
Environmental health and toxicology
Systematic reviews are a relatively recent innovation in the field of environmental health and toxicology. Although mooted in the mid-2000s, the first full frameworks for conduct of systematic reviews of environmental health evidence were published in 2014 by the US National Toxicology Program's Office of Health Assessment and Translation and the Navigation Guide at the University of California San Francisco's Program on Reproductive Health and the Environment. Uptake has since been rapid, with the estimated number of systematic reviews in the field doubling since 2016 and the first consensus recommendations on best practice, as a precursor to a more general standard, being published in 2020.
Social, behavioural, and educational
In 1959, social scientist and social work educator Barbara Wootton published one of the first contemporary systematic reviews of literature on anti-social behavior as part of her work, Social Science and Social Pathology.
Several organisations use systematic reviews in social, behavioural, and educational areas of evidence-based policy, including the National Institute for Health and Care Excellence (NICE, UK), Social Care Institute for Excellence (SCIE, UK), the Agency for Healthcare Research and Quality (AHRQ, US), the World Health Organization, the International Initiative for Impact Evaluation (3ie), the Joanna Briggs Institute, and the Campbell Collaboration. The quasi-standard for systematic review in the social sciences is based on the procedures proposed by the Campbell Collaboration, which is one of several groups promoting evidence-based policy in the social sciences.
Others
Some attempts to transfer the procedures from medicine to business research have been made, including a step-by-step approach, and developing a standard procedure for conducting systematic literature reviews in business and economics.
Systematic reviews are increasingly prevalent in other fields, such as international development research. Subsequently, several donors (including the UK Department for International Development (DFID) and AusAid) are focusing more on testing the appropriateness of systematic reviews in assessing the impacts of development and humanitarian interventions.
The Collaboration for Environmental Evidence (CEE) has a journal titled Environmental Evidence, which publishes systematic reviews, review protocols, and systematic maps on the impacts of human activity and the effectiveness of management interventions.
Review tools
A 2022 publication identified 24 systematic review tools and ranked them by inclusion of 30 features deemed most important when performing a systematic review in accordance with best practices. The top six software tools (with at least 21/30 key features) are all proprietary paid platforms, typically web-based, and include:
Giotto Compliance
DistillerSR
Nested Knowledge
EPPI-Reviewer Web
LitStream
JBI SUMARI
The Cochrane Collaboration provides a handbook for systematic reviewers of interventions which "provides guidance to authors for the preparation of Cochrane Intervention reviews." The Cochrane Handbook also outlines steps for preparing a systematic review and forms the basis of two sets of standards for the conduct and reporting of Cochrane Intervention Reviews (MECIR; Methodological Expectations of Cochrane Intervention Reviews). It also contains guidance on integrating patient-reported outcomes into reviews.
Limitations
Out-dated or risk of bias
While systematic reviews are regarded as the strongest form of evidence, a 2003 review of 300 studies found that not all systematic reviews were equally reliable, and that their reporting can be improved by a universally agreed upon set of standards and guidelines. A further study by the same group found that of 100 systematic reviews monitored, 7% needed updating at the time of publication, another 4% within a year, and another 11% within 2 years; this figure was higher in rapidly changing fields of medicine, especially cardiovascular medicine. A 2003 study suggested that extending searches beyond major databases, perhaps into grey literature, would increase the effectiveness of reviews.
Some authors have highlighted problems with systematic reviews, particularly those conducted by Cochrane, noting that published reviews are often biased, out of date, and excessively long. Cochrane reviews have been criticized as not being sufficiently critical in the selection of trials and including too many of low quality. These critics proposed several solutions, including limiting studies in meta-analyses and reviews to registered clinical trials, requiring that original data be made available for statistical checking, paying greater attention to sample size estimates, and eliminating dependence on only published data. Some of these difficulties were noted as early as 1994.
Methodological limitations of meta-analysis have also been noted. Another concern is that the methods used to conduct a systematic review are sometimes changed once researchers see the available trials they are going to include. Some websites have described retractions of systematic reviews and published reports of studies included in published systematic reviews. Arbitrary eligibility criteria may affect the perceived quality of the review.
Limited reporting of data from human studies
The AllTrials campaign reports that around half of clinical trials have never reported results, and works to improve reporting. 'Positive' trials were twice as likely to be published as those with 'negative' results.
As of 2016, it is legal for for-profit companies to conduct clinical trials and not publish the results. For example, in the past 10 years, 8.7 million patients have taken part in trials that have not published results. These factors mean that there is likely a significant publication bias, with only 'positive' or perceived favourable results being published. A recent systematic review of industry sponsorship and research outcomes concluded that "sponsorship of drug and device studies by the manufacturing company leads to more favorable efficacy results and conclusions than sponsorship by other sources" and that there is an industry bias that cannot be explained by standard 'risk of bias' assessments.
Poor compliance with review reporting guidelines
The rapid growth of systematic reviews in recent years has been accompanied by the attendant issue of poor compliance with guidelines, particularly in areas such as declaration of registered study protocols, funding source declaration, risk of bias data, issues resulting from data abstraction, and description of clear study objectives. A host of studies have identified weaknesses in the rigour and reproducibility of search strategies in systematic reviews. To remedy this issue, a new PRISMA guideline extension called PRISMA-S is being developed. Furthermore, tools and checklists for peer-reviewing search strategies have been created, such as the Peer Review of Electronic Search Strategies (PRESS) guidelines.
A key challenge for using systematic reviews in clinical practice and healthcare policy is assessing the quality of a given review. Consequently, a range of appraisal tools to evaluate systematic reviews have been designed. The two most popular measurement instruments and scoring tools for systematic review quality assessment are AMSTAR 2 (a measurement tool to assess the methodological quality of systematic reviews) and ROBIS (Risk Of Bias In Systematic reviews); however, these are not appropriate for all systematic review types.
History
The first publication that is now recognized as equivalent to a modern systematic review was a 1753 paper by James Lind, which reviewed all of the previous publications about scurvy. Systematic reviews appeared only sporadically until the 1980s, and became common after 2000. More than 10,000 systematic reviews are published each year.
History in medicine
A 1904 British Medical Journal paper by Karl Pearson collated data from several studies in the UK, India and South Africa of typhoid inoculation. He used a meta-analytic approach to aggregate the outcomes of multiple clinical studies. In 1972, Archie Cochrane wrote: "It is surely a great criticism of our profession that we have not organised a critical summary, by specialty or subspecialty, adapted periodically, of all relevant randomised controlled trials". Critical appraisal and synthesis of research findings in a systematic way emerged in 1975 under the term 'meta analysis'. Early syntheses were conducted in broad areas of public policy and social interventions, with systematic research synthesis applied to medicine and health. Inspired by his own personal experiences as a senior medical officer in prisoner of war camps, Archie Cochrane worked to improve the scientific method in medical evidence. His call for the increased use of randomised controlled trials and systematic reviews led to the creation of The Cochrane Collaboration, which was founded in 1993 and named after him, building on the work by Iain Chalmers and colleagues in the area of pregnancy and childbirth.
See also
Critical appraisal
Further research is needed
Systematic searching
Horizon scanning
Literature review
Living review
Meta-analysis
Metascience
Peer review
Review journal
Generalized model aggregation (GMA)
Umbrella review
References
STARDIT report Q101116128.
External links
Systematic Review Tools — Search and list of systematic review software tools
Cochrane Collaboration
MeSH: Review Literature—articles about the review process
MeSH: Review [Publication Type] - limit search results to reviews
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement , "an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses"
PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and explanation
Animated Storyboard: What Are Systematic Reviews? - Cochrane Consumers and Communication Group
Sysrev - a free platform with open access systematic reviews.
STARDIT - an open access data-sharing system to standardise the way that information about initiatives is reported.
Evidence-based practices
Information science
Meta-analysis
Nursing research | 0.776721 | 0.996309 | 0.773853 |
Structural formula | The structural formula of a chemical compound is a graphic representation of the molecular structure (determined by structural chemistry methods), showing how the atoms are possibly arranged in the real three-dimensional space. The chemical bonding within the molecule is also shown, either explicitly or implicitly. Unlike other chemical formula types, which have a limited number of symbols and are capable of only limited descriptive power, structural formulas provide a more complete geometric representation of the molecular structure. For example, many chemical compounds exist in different isomeric forms, which have different enantiomeric structures but the same molecular formula. There are multiple types of ways to draw these structural formulas such as: Lewis structures, condensed formulas, skeletal formulas, Newman projections, Cyclohexane conformations, Haworth projections, and Fischer projections.
Several systematic chemical naming formats, as in chemical databases, are used that are equivalent to, and as powerful as, geometric structures. These chemical nomenclature systems include SMILES, InChI and CML. These systematic chemical names can be converted to structural formulas and vice versa, but chemists nearly always describe a chemical reaction or synthesis using structural formulas rather than chemical names, because the structural formulas allow the chemist to visualize the molecules and the structural changes that occur in them during chemical reactions. ChemSketch and ChemDraw are popular software packages that allow users to draw reactions and structural formulas, typically in the Lewis structure style.
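As a hedged illustration of converting between a systematic line notation and a machine-readable structure, the sketch below uses the open-source RDKit toolkit; RDKit is chosen here purely for illustration and is not one of the tools named above.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolDescriptors

# Parse a SMILES string (acetic acid) into an internal structure object.
mol = Chem.MolFromSmiles("CC(=O)O")

print(rdMolDescriptors.CalcMolFormula(mol))   # molecular formula: C2H4O2
print(Chem.MolToSmiles(mol))                  # canonical SMILES for the same structure

# Generate 2D coordinates and export a molfile connection table
# (atoms, bonds and 2D positions), the machine-readable analogue
# of a drawn structural formula.
AllChem.Compute2DCoords(mol)
print(Chem.MolToMolBlock(mol))
```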
Structures in structural formulas
Bonds
Bonds are often shown as a line that connects one atom to another. One line indicates a single bond, two lines indicate a double bond, and three lines indicate a triple bond. In some structures every atom in each bond is written out explicitly. In others, particularly skeletal-style drawings, the carbon atoms are not written out; instead, each carbon is indicated by a corner formed where two lines meet. Additionally, hydrogen atoms are usually implied rather than drawn, and their number can be inferred from how many other atoms the carbon is attached to. For example, if carbon A is attached to only one other carbon, B, then carbon A carries three hydrogens to complete its four bonds and fill its octet.
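A minimal sketch of this implicit-hydrogen rule, again using RDKit purely as an illustrative tool (an assumption, not something the article prescribes):

```python
from rdkit import Chem

# Ethane, written as SMILES: two carbons joined by one single bond.
ethane = Chem.MolFromSmiles("CC")

for atom in ethane.GetAtoms():
    # Each carbon has one C-C bond, so three implicit hydrogens
    # complete its four bonds.
    print(atom.GetSymbol(), atom.GetTotalNumHs())   # prints "C 3" for each carbon
```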
Electrons
Electrons are usually shown as filled (colored-in) circles, with one circle indicating one electron and two circles indicating a pair of electrons. Typically, a pair of electrons will also indicate a negative charge. These circles show the number of electrons in the valence shell of each atom, providing further information about that atom's reactive capacity within the molecule.
Charges
Atoms often carry a positive or negative charge when their octet is not complete. An atom that is missing electrons (or has gained a proton) carries a positive charge, while an atom with extra non-bonding electrons carries a negative charge. In structural formulas, the positive charge is indicated by ⊕, and the negative charge is indicated by ⊖.
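The bookkeeping behind these charge symbols is the usual formal-charge count; the hydroxide ion below is chosen purely as a worked illustration.

```latex
% Formal charge of an atom in a structural formula:
\[
\mathrm{FC} = V - N - \tfrac{1}{2}B
\]
% where V = valence electrons of the free atom, N = non-bonding electrons,
% and B = electrons shared in bonds.
% Example: the oxygen in hydroxide (OH^-) has V = 6, N = 6 (three lone pairs)
% and B = 2 (one single bond), so FC = 6 - 6 - 1 = -1.
```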
Stereochemistry (Skeletal formula)
Chirality in skeletal formulas is indicated by the Natta projection method. Stereochemistry is used to show the relative spatial arrangement of atoms in a molecule. Wedges are used to show this, and there are two types: dashed and filled. A filled wedge indicates that the atom is in front of the plane of the paper, pointing towards the viewer. A dashed wedge indicates that the atom is behind the plane of the paper, pointing away from the viewer. When a straight, un-dashed line is used, the atom is in the plane of the paper. This convention gives an idea of the molecule in three-dimensional space, within the constraints on how the atoms can actually be arranged.
Unspecified stereochemistry
Wavy single bonds represent unknown or unspecified stereochemistry or a mixture of isomers. For example, fructose can be drawn with a wavy bond to the HOCH2- group at the left. In this case the two possible ring structures are in chemical equilibrium with each other and also with the open-chain structure. The ring automatically opens and closes, sometimes closing with one stereochemistry and sometimes with the other.
Skeletal formulas can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes, but is no longer considered an acceptable style for general use.
Lewis structures
Lewis structures (or "Lewis dot structures") are flat graphical formulas that show atom connectivity and lone pair or unpaired electrons, but not three-dimensional structure. This notation is mostly used for small molecules. Each line represents the two electrons of a single bond. Two or three parallel lines between pairs of atoms represent double or triple bonds, respectively. Alternatively, pairs of dots may be used to represent bonding pairs. In addition, all non-bonded electrons (paired or unpaired) and any formal charges on atoms are indicated. Because Lewis structures show where the electrons are placed, whether in bonds or in lone pairs, they allow the formal charges of the atoms to be identified, which helps in assessing stability and in determining the most likely product (based on differences in molecular geometry) of a reaction. Lewis structures also give some sense of geometry, since the bonds are often drawn at angles that approximate the real molecule. They are best used to work out formal charges or how atoms bond to each other, since both electrons and bonds are shown; they also give an idea of the molecular and electronic geometry, which varies with the numbers of bonds and lone pairs, and from this the bond angles and hybridization can be determined.
Condensed formulas
In early organic-chemistry publications, where use of graphics was strongly limited, a typographic system arose to describe organic structures in a line of text. Although this system tends to be problematic in application to cyclic compounds, it remains a convenient way to represent simple structures:
CH3CH2OH (ethanol)
Parentheses are used to indicate multiple identical groups, indicating attachment to the nearest non-hydrogen atom on the left when appearing within a formula, or to the atom on the right when appearing at the start of a formula:
(CH3)2CHOH or CH(CH3)2OH (2-propanol)
In all cases, all atoms are shown, including hydrogen atoms. Carbonyl groups can also be shown in condensed form: the C=O is implied by placing the O in parentheses directly after the carbon. For example:
CH3C(O)CH3 (acetone)
Therefore, it is important to look to the left of the atom in the parentheses to confirm which atom it is attached to. This is helpful when converting from a condensed formula to another form of structural formula such as a skeletal formula or Lewis structure. Different functional groups have conventional condensed forms, such as aldehydes as CHO, carboxylic acids as CO2H or COOH, and esters as CO2R or COOR. However, a condensed formula does not give an immediate idea of the molecular geometry of the compound or the number of bonds between the carbons; these need to be worked out from the number of atoms attached to each carbon and from any charges on the carbons.
Skeletal formulas
Skeletal formulas are the standard notation for more complex organic molecules. In this type of diagram, first used by the organic chemist Friedrich August Kekulé von Stradonitz, the carbon atoms are implied to be located at the vertices (corners) and ends of line segments rather than being indicated with the atomic symbol C. Hydrogen atoms attached to carbon atoms are not indicated: each carbon atom is understood to be associated with enough hydrogen atoms to give the carbon atom four bonds. The presence of a positive or negative charge at a carbon atom takes the place of one of the implied hydrogen atoms. Hydrogen atoms attached to atoms other than carbon must be written explicitly. An additional feature of skeletal formulas is that, by adding certain marks, the stereochemistry (that is, the three-dimensional structure) of the compound can be indicated. Often, the skeletal formula indicates stereochemistry through the use of wedges instead of plain lines. Solid wedges represent bonds pointing above the plane of the paper, whereas dashed wedges represent bonds pointing below the plane.
Perspective drawings
Newman projection and sawhorse projection
The Newman projection and the sawhorse projection are used to depict specific conformers or to distinguish vicinal stereochemistry. In both cases, two specific carbon atoms and their connecting bond are the center of attention. The only difference is a slightly different perspective: the Newman projection looks straight down the bond of interest, while the sawhorse projection looks at the same bond from a somewhat oblique vantage point. In the Newman projection, a circle is used to represent a plane perpendicular to the bond, distinguishing the substituents on the front carbon from the substituents on the back carbon. In the sawhorse projection, the front carbon is usually on the left and is always slightly lower. Sometimes, an arrow is used to indicate the front carbon. The sawhorse projection is very similar to a skeletal formula, and it can even use wedges instead of lines to indicate stereochemistry. It is set apart from skeletal formulas in that it is not a very good indicator of overall molecular geometry and arrangement. Both a Newman and a sawhorse projection can be used to construct a Fischer projection.
Cyclohexane conformations
Certain conformations of cyclohexane and other small-ring compounds can be shown using a standard convention. For example, the standard chair conformation of cyclohexane involves a perspective view from slightly above the average plane of the carbon atoms and indicates clearly which groups are axial (pointing vertically up or down) and which are equatorial (almost horizontal, slightly slanted up or down). Bonds in front may or may not be highlighted with stronger lines or wedges. The conformations progress as follows: chair to half-chair to twist-boat to boat to twist-boat to half-chair to chair. These conformations may also be used to show the potential energy at each stage: the chair conformations have the lowest energy and the half-chair conformations the highest, with a local maximum at the boat conformation and local minima at the twist-boat conformations. In addition, cyclohexane conformations can be used to indicate whether the molecule has any 1,3-diaxial interactions, which are steric interactions between axial substituents on the 1, 3, and 5 carbons.
Haworth projection
The Haworth projection is used for cyclic sugars. Axial and equatorial positions are not distinguished; instead, substituents are positioned directly above or below the ring atom to which they are connected. Hydrogen substituents are typically omitted.
However, it is important to keep in mind when reading a Haworth projection that the actual ring structures are not flat, so the projection does not convey the true three-dimensional shape. Sir Norman Haworth was a British chemist who won a Nobel Prize for his work on carbohydrates and for determining the structure of vitamin C; in the course of this work he also devised the structural formulas now referred to as Haworth projections. In a Haworth projection a pyranose sugar is depicted as a hexagon and a furanose sugar as a pentagon. Usually the ring oxygen is placed at the upper right corner of a pyranose and at the upper center of a furanose. The thinner bonds at the top of the ring represent the part of the ring farther from the viewer, and the thicker bonds at the bottom represent the part closer to the viewer.
Fischer projection
The Fischer projection is mostly used for linear monosaccharides. At any given carbon center, vertical bond lines are equivalent to stereochemical hashed markings, directed away from the observer, while horizontal lines are equivalent to wedges, pointing toward the observer. The projection is unrealistic, as a saccharide would never adopt this multiply eclipsed conformation. Nonetheless, the Fischer projection is a simple way of depicting multiple sequential stereocenters that does not require or imply any knowledge of actual conformation. A Fischer projection restricts a 3-D molecule to 2-D, so there are limitations on how the configurations of the chiral centers may be manipulated on paper. Fischer projections can be used to assign the R or S configuration at a chiral carbon using the Cahn–Ingold–Prelog rules, and they are a convenient way to represent and distinguish between enantiomers and diastereomers.
Limitations
A structural formula is a simplified model that cannot represent certain aspects of chemical structures. For example, formalized bonding may not be applicable to dynamic systems such as delocalized bonds. Aromaticity is such a case and relies on convention to represent the bonding. Different styles of structural formulas may represent aromaticity in different ways, leading to different depictions of the same chemical compound. Another example is formal double bonds where the electron density is spread outside the formal bond, leading to partial double bond character and slow inter-conversion at room temperature. For all dynamic effects, temperature will affect the inter-conversion rates and may change how the structure should be represented. There is no explicit temperature associated with a structural formula, although many assume that it would be standard temperature.
See also
Molecular graph
Chemical formula
Valency interaction formula
Side chain
Chemical structure
Notes
References
External links
The Importance of Structural Formulas
How to get structural formulas using crystallography
Chemical formulas
Chemical structures | 0.778207 | 0.994367 | 0.773823 |
Energy flow (ecology) | Energy flow is the flow of energy through living things within an ecosystem. All living organisms can be organized into producers and consumers, and those producers and consumers can further be organized into a food chain. Each of the levels within the food chain is a trophic level. In order to more efficiently show the quantity of organisms at each trophic level, these food chains are then organized into trophic pyramids. The arrows in the food chain show that the energy flow is unidirectional, with the head of an arrow indicating the direction of energy flow; energy is lost as heat at each step along the way.
The unidirectional flow of energy and the successive loss of energy as it travels up the food web are patterns in energy flow that are governed by thermodynamics, which is the theory of energy exchange between systems. Trophic dynamics relates to thermodynamics because it deals with the transfer and transformation of energy (originating externally from the sun via solar radiation) to and among organisms.
Energetics and the carbon cycle
The first step in energetics is photosynthesis, wherein water and carbon dioxide from the air are taken in together with energy from the sun and converted into oxygen and glucose. Cellular respiration is the reverse reaction, wherein oxygen and sugar are taken in and energy is released as they are converted back into carbon dioxide and water. The carbon dioxide and water produced by respiration can be recycled back into plants.
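The overall reactions described in words above can be written as balanced equations:

```latex
% Photosynthesis:
\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
% Cellular respiration (the reverse):
\[
\mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \longrightarrow 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy}
\]
```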
Energy loss can be measured either by efficiency (how much energy makes it to the next level), or by biomass (how much living material exists at those levels at one point in time, measured by standing crop). Of all the net primary productivity at the producer trophic level, in general only 10% goes to the next level, the primary consumers, then only 10% of that 10% goes on to the next trophic level, and so on up the food pyramid. Ecological efficiency may be anywhere from 5% to 20% depending on how efficient or inefficient that ecosystem is. This decrease in efficiency occurs because organisms need to perform cellular respiration to survive, and energy is lost as heat when cellular respiration is performed. That is also why there are fewer tertiary consumers than there are producers.
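A minimal numerical sketch of this "10% rule", using a hypothetical starting figure chosen only to make the arithmetic easy to follow:

```python
# Energy reaching each successive trophic level when ~90% is lost
# as heat through respiration at every step.
net_primary_production = 10_000   # hypothetical units, e.g. kcal per m^2 per year
transfer_efficiency = 0.10        # often quoted as roughly 5-20% depending on the ecosystem

energy = net_primary_production
for level in ["producers", "primary consumers", "secondary consumers", "tertiary consumers"]:
    print(f"{level:20s} {energy:8.0f}")
    energy *= transfer_efficiency   # ~90% lost before the next level
```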
Primary production
A producer is any organism that performs photosynthesis. Producers are important because they convert energy from the sun into a storable and usable chemical form of energy, glucose, as well as oxygen. The producers themselves can use the energy stored in glucose to perform cellular respiration. Or, if the producer is consumed by herbivores in the next trophic level, some of the energy is passed on up the pyramid. The glucose stored within producers serves as food for consumers, and so it is only through producers that consumers are able to access the sun’s energy. Some examples of primary producers are algae, mosses, and other plants such as grasses, trees, and shrubs.
Chemosynthetic bacteria perform a process similar to photosynthesis, but instead of energy from the sun they use energy stored in chemicals like hydrogen sulfide. This process, referred to as chemosynthesis, usually occurs deep in the ocean at hydrothermal vents that produce heat and chemicals such as hydrogen, hydrogen sulfide and methane. Chemosynthetic bacteria can use the energy in the bonds of the hydrogen sulfide and oxygen to convert carbon dioxide to glucose, releasing water and sulfur in the process. Organisms that consume the chemosynthetic bacteria can take in the glucose and use oxygen to perform cellular respiration, similar to herbivores consuming producers.
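One commonly written overall reaction for this kind of sulfide-based chemosynthesis is shown below; the exact stoichiometry varies with the organism and conditions, so this is an illustrative form rather than a universal equation.

```latex
\[
\mathrm{CO_2} + 4\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow \mathrm{CH_2O} + 4\,\mathrm{S} + 3\,\mathrm{H_2O}
\]
% CH2O represents one carbon's worth of carbohydrate (glucose is C6H12O6).
```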
One of the factors that controls primary production is the amount of energy that enters the producer(s), which can be measured using productivity. Only one percent of solar energy enters the producer; the rest bounces off or passes through. Gross primary productivity is the amount of energy the producer actually captures. Generally, about 60% of the energy that enters the producer goes to the producer's own respiration. Net primary productivity is the amount that the plant retains after the energy used for cellular respiration is subtracted. Another factor controlling primary production is the level of organic and inorganic nutrients in the water or soil in which the producer lives.
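In equation form, with the roughly 60% respiratory loss quoted above used as a worked example:

```latex
\[
\mathrm{NPP} = \mathrm{GPP} - R_{\text{autotroph}}
\]
% e.g. a producer that captures 1000 units of energy (GPP) and respires ~60% of it
% retains about 1000 - 600 = 400 units as net primary productivity (NPP).
```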
Secondary production
Secondary production is the conversion by consumers of energy stored in plants into their own biomass. Different ecosystems have different numbers of consumer levels, but all end with one top consumer. Most energy is stored in the organic matter of plants, and as consumers eat these plants they take up this energy. The energy in herbivores and omnivores is then consumed by carnivores. A large amount of the energy in primary production also ends up as waste or litter, referred to as detritus. The detrital food chain includes large numbers of microbes, macroinvertebrates, meiofauna, fungi, and bacteria. These organisms are consumed by omnivores and carnivores and account for a large amount of secondary production. Secondary consumers vary widely in how efficiently they consume. The efficiency of energy being passed on to consumers is estimated to be around 10%. Energy flow through consumers differs in aquatic and terrestrial environments.
In aquatic environments
Heterotrophs contribute to secondary production, which depends on primary productivity and the net primary products. Secondary production is the energy that herbivores and decomposers use and thus depends on primary productivity. Herbivores and decomposers consume carbon from two main organic sources in aquatic ecosystems: autochthonous and allochthonous. Autochthonous carbon comes from within the ecosystem and includes aquatic plants, algae and phytoplankton. Allochthonous carbon from outside the ecosystem is mostly dead organic matter from the terrestrial ecosystem entering the water. In stream ecosystems, approximately 66% of annual energy input can be washed downstream. The remaining amount is consumed and lost as heat.
In terrestrial environments
Secondary production is often described in terms of trophic levels, and while this can be useful in explaining relationships it overemphasizes the rarer interactions. Consumers often feed at multiple trophic levels. Energy transferred above the third trophic level is relatively unimportant. The assimilation efficiency can be expressed in terms of how much food the consumer has eaten, how much of it the consumer assimilates, and how much is expelled as feces or urine. While a portion of the energy is used for respiration, another portion goes towards biomass in the consumer. There are two major food chains: in the primary food chain, energy from autotrophs is passed on to the consumers, while in the second major food chain carnivores eat the herbivores or decomposers that consume the autotrophic energy. Consumers are broken down into primary consumers, secondary consumers and tertiary consumers. Carnivores have a much higher assimilation efficiency, about 80%, while herbivores have a much lower efficiency of approximately 20 to 50%. Energy in a system can be affected by animal emigration and immigration. The movements of organisms are significant in terrestrial ecosystems. Energetic consumption by herbivores in terrestrial ecosystems has a low range of ~3-7%. The flow of energy is similar in many terrestrial environments. The fluctuation in the amount of net primary product consumed by herbivores is generally low. This is in large contrast to aquatic environments of lakes and ponds, where grazers have a much higher consumption of around ~33%. Ectotherms and endotherms have very different assimilation efficiencies.
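A common way to express the assimilation efficiency mentioned above, where I is energy ingested, A is energy assimilated, and F is energy egested as feces or urine:

```latex
\[
\text{assimilation efficiency} = \frac{A}{I} = \frac{I - F}{I}
\]
% e.g. a carnivore assimilating 80 of every 100 units eaten has an efficiency of 0.8,
% compared with roughly 0.2-0.5 for many herbivores.
```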
Detritivores
Detritivores consume organic material that is decomposing and are in turn consumed by carnivores. Predator productivity is correlated with prey productivity, confirming that the primary productivity of an ecosystem affects all of the productivity that follows.
Detritus makes up a large portion of the organic material in ecosystems. In temperate forests, dead plant material accounts for approximately 62% of the organic material.
In an aquatic ecosystem, leaf matter that falls into streams gets wet and begins to leach organic material. This happens rather quickly and attracts microbes and invertebrates. The leaves can be broken down into large pieces called coarse particulate organic matter (CPOM). The CPOM is rapidly colonized by microbes. Meiofauna is extremely important to secondary production in stream ecosystems. The microbes breaking down and colonizing this leaf matter are very important to the detritivores. The detritivores make the leaf matter more edible by releasing compounds from the tissues, which ultimately helps soften them. As leaves decay, nitrogen decreases, since the cellulose and lignin in the leaves are difficult to break down. Thus the colonizing microbes bring in nitrogen in order to aid in the decomposition. Leaf breakdown can depend on initial nitrogen content, season, and species of trees. Different tree species drop their leaves at different times, so the breakdown of leaves happens at different times, producing what is called a mosaic of microbial populations.
Species effects and diversity in an ecosystem can be analyzed through their performance and efficiency. In addition, secondary production in streams can be influenced heavily by detritus that falls into the streams; the production of benthic fauna biomass and abundance decreased by an additional 47–50% in a study of litter removal and exclusion.
Energy flow across ecosystems
Research has demonstrated that primary producers fix carbon at similar rates across ecosystems. Once carbon has been introduced into a system as a viable source of energy, the mechanisms that govern the flow of energy to higher trophic levels vary across ecosystems. Among aquatic and terrestrial ecosystems, patterns have been identified that can account for this variation and have been divided into two main pathways of control: top-down and bottom-up. The acting mechanisms within each pathway ultimately regulate community and trophic level structure within an ecosystem to varying degrees. Bottom-up controls involve mechanisms that are based on resource quality and availability, which control primary productivity and the subsequent flow of energy and biomass to higher trophic levels. Top-down controls involve mechanisms that are based on consumption by consumers. These mechanisms control the rate of energy transfer from one trophic level to another as herbivores or predators feed on lower trophic levels.
Aquatic vs terrestrial ecosystems
Much variation in the flow of energy is found within each type of ecosystem, creating a challenge in identifying variation between ecosystem types. In a general sense, the flow of energy is a function of primary productivity with temperature, water availability, and light availability. For example, among aquatic ecosystems, higher rates of production are usually found in large rivers and shallow lakes than in deep lakes and clear headwater streams. Among terrestrial ecosystems, marshes, swamps, and tropical rainforests have the highest primary production rates, whereas tundra and alpine ecosystems have the lowest. The relationships between primary production and environmental conditions have helped account for variation within ecosystem types, allowing ecologists to demonstrate that energy flows more efficiently through aquatic ecosystems than terrestrial ecosystems due to the various bottom-up and top-down controls in play.
Bottom-up
The strength of bottom-up controls on energy flow is determined by the nutritional quality, size, and growth rates of primary producers in an ecosystem. Photosynthetic material is typically rich in nitrogen (N) and phosphorus (P) and supplements the high herbivore demand for N and P across all ecosystems. Aquatic primary production is dominated by small, single-celled phytoplankton that are mostly composed of photosynthetic material, providing an efficient source of these nutrients for herbivores. In contrast, multi-cellular terrestrial plants contain many large supporting cellulose structures of high carbon but low nutrient value. Because of this structural difference, aquatic primary producers have less biomass per photosynthetic tissue stored within the aquatic ecosystem than in the forests and grasslands of terrestrial ecosystems. This low biomass relative to photosynthetic material in aquatic ecosystems allows for a more efficient turnover rate compared to terrestrial ecosystems. As phytoplankton are consumed by herbivores, their enhanced growth and reproduction rates sufficiently replace lost biomass and, in conjunction with their nutrient dense quality, support greater secondary production.
Additional factors impacting primary production include inputs of N and P, which occur at a greater magnitude in aquatic ecosystems. These nutrients are important in stimulating plant growth and, when passed to higher trophic levels, stimulate consumer biomass and growth rate. If either of these nutrients is in short supply, it can limit overall primary production. Within lakes, P tends to be the greater limiting nutrient, while both N and P limit primary production in rivers. Due to these limiting effects, nutrient inputs can potentially alleviate the limitations on net primary production of an aquatic ecosystem. Allochthonous material washed into an aquatic ecosystem introduces N and P as well as energy in the form of carbon molecules that are readily taken up by primary producers. Greater inputs and increased nutrient concentrations support greater net primary production rates, which in turn support greater secondary production.
Top-down
Top-down mechanisms exert greater control on aquatic primary producers due to the role of consumers within an aquatic food web. Among consumers, herbivores can mediate the impacts of trophic cascades by bridging the flow of energy from primary producers to predators in higher trophic levels. Across ecosystems, there is a consistent association between herbivore growth and producer nutritional quality. However, in aquatic ecosystems, primary producers are consumed by herbivores at a rate four times greater than in terrestrial ecosystems. Although this topic is highly debated, researchers have attributed the distinction in herbivore control to several theories, including producer to consumer size ratios and herbivore selectivity.
Modeling of top-down controls on primary producers suggests that the greatest control on the flow of energy occurs when the size ratio of consumer to primary producer is the highest. The size distribution of organisms found within a single trophic level in aquatic systems is much narrower than that of terrestrial systems. On land, the consumer size ranges from smaller than the plant it consumes, such as an insect, to significantly larger, such as an ungulate, while in aquatic systems, consumer body size within a trophic level varies much less and is strongly correlated with trophic position. As a result, the size difference between producers and consumers is consistently larger in aquatic environments than on land, resulting in stronger herbivore control over aquatic primary producers.
Herbivores can potentially control the fate of organic matter as it is cycled through the food web. Herbivores tend to select nutritious plants while avoiding plants with structural defense mechanisms. Like support structures, defense structures are composed of nutrient poor, high carbon cellulose. Access to nutritious food sources enhances herbivore metabolism and energy demands, leading to greater removal of primary producers. In aquatic ecosystems, phytoplankton are highly nutritious and generally lack defense mechanisms. This results in greater top-down control because consumed plant matter is quickly released back into the system as labile organic waste. In terrestrial ecosystems, primary producers are less nutritionally dense and are more likely to contain defense structures. Because herbivores prefer nutritionally dense plants and avoid plants or plant parts with defense structures, a greater amount of plant matter is left unconsumed within the ecosystem. Herbivore avoidance of low-quality plant matter may be why terrestrial systems exhibit weaker top-down control on the flow of energy.
See also
References
Further reading
Ecology terminology
Energy
Environmental science
Ecological economics | 0.777179 | 0.995515 | 0.773693 |
Carbon-based life | Carbon is a primary component of all known life on Earth, and represents approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS).
Because carbon atoms are lightweight and relatively small, carbon molecules are easy for enzymes to manipulate; carbonic anhydrase is one example of such an enzyme. Carbon has an atomic number of 6 on the periodic table. The carbon cycle is a biogeochemical cycle that is important in maintaining life on Earth over long time spans, and it includes carbon sequestration and carbon sinks. Plate tectonics is needed for life over a long time span, and carbon-based life is important in the plate tectonic process. An abundance of iron- and sulfur-based anoxygenic photosynthetic life forms that lived 3.80 to 3.85 billion years ago on Earth produced abundant black shale deposits. These shale deposits increase heat flow and crust buoyancy, especially on the sea floor, helping to drive plate tectonics. Talc is another mineral that helps drive plate tectonics, and inorganic processes also contribute. Carbon-based photosynthetic life caused a rise in oxygen on Earth, and this increase of oxygen helped plate tectonics form the first continents. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics, like Carl Sagan in 1973, refer to this assumption as carbon chauvinism.
Characteristics
Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that is but a fraction of the number of compounds that are theoretically possible under standard conditions. The enormous diversity of carbon compounds, known as organic compounds, has led to a distinction between them and the inorganic compounds that do not contain carbon. The branch of chemistry that studies organic compounds is known as organic chemistry.
Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable it to serve as a common element of all known living organisms. A 2018 study estimated that all life on Earth contains approximately 550 billion tons of carbon. Carbon is the second most abundant element in the human body by mass (about 18.5%) after oxygen.
The most important characteristics of carbon as a basis for the chemistry of cellular life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously, and that the energy required to make or break a bond with a carbon atom is at an appropriate level for building large and complex molecules which may be both stable and reactive. Carbon atoms bond readily to other carbon atoms; this allows the building of arbitrarily long macromolecules and polymers in a process known as catenation (Oxford English Dictionary, 1st edition, 1889, s.v. 'chain', definition 4g). As Stephen Hawking put it in a 2008 lecture, "What we normally think of as 'life' is based on chains of carbon atoms, with a few other atoms, such as nitrogen or phosphorus", and "carbon ... has the richest chemistry."
Norman Horowitz was the head of the Jet Propulsion Laboratory's bioscience section for the Viking Lander mission of 1976, the first U.S. mission to successfully land an unmanned probe on the surface of Mars. He considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. However, the results of this mission indicated that Mars is at present extremely hostile to carbon-based life. He also considered that, in general, there was only a remote possibility that non-carbon life forms would be able to evolve with genetic information systems capable of self-replication and adaptation.
Key molecules
The most notable classes of biological macromolecules used in the fundamental processes of living organisms include:
Proteins, which are the building blocks from which the structures of living organisms are constructed (this includes almost all enzymes, which catalyse organic chemical reactions).
Amino acids, which make up proteins and are specified by the genetic code of life.
Nucleic acids, which carry genetic information.
Ribonucleic acid (RNA), involved in the production of proteins.
Deoxyribonucleic acid (DNA), the nucleic acid in which genetic information is stored.
Peptides, building blocks of proteins.
Lipids, which also store energy, but in a more concentrated form, and which may be stored for extended periods in the bodies of animals.
Phospholipids, used in cell membranes.
Carbohydrates, which store energy in a form that can be used by living cells.
Lectins, carbohydrate-binding proteins.
Monosaccharides, simple sugars including glucose and fructose.
Disaccharides, water-soluble sugars including lactose, maltose, and sucrose.
Starch, made of amylose and amylopectin, the energy store of plants.
Glycogen, the energy store of animals.
Cellulose, a biopolymer, found in the cell walls of plants.
Fatty acids, of two types, saturated and unsaturated (oils), which serve as stored energy.
Essential fatty acids, required by but not synthesized in the human body.
Steroids, which act as hormones and as components of cell membranes.
Neurotransmitters, which are signaling molecules.
Cholesterol, found in the brain and spinal cord of animals.
Waxes, found in beeswax and lanolin; plant waxes are used for protection.
Water
Liquid water is essential for carbon-based life. The chemistry of carbon-based molecules in living systems takes place in liquid water, which acts as a solvent able to pair with a wide range of compounds. In humans, 55% to 60% of the body is water. Water provides for the reversible hydration of carbon dioxide, which is needed in carbon-based life, and all life on Earth uses the same carbon-based biochemistry. Water is central to the action of carbonic anhydrases, a family of carbon-based enzymes that catalyse the reversible hydration of carbon dioxide and support acid–base homeostasis, the regulation of pH in living organisms. In plant life, liquid water is also needed for photosynthesis, the biological process plants use to convert light energy and carbon dioxide into chemical energy.
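The link between carbon dioxide hydration and acid–base homeostasis can be illustrated numerically with the Henderson–Hasselbalch relation for the bicarbonate buffer. The short sketch below is only an illustration, not part of the original article: the pKa of 6.1 and the concentration values are assumed, typical textbook figures for blood plasma.

```python
import math

def bicarbonate_ph(hco3_mM: float, dissolved_co2_mM: float, pKa: float = 6.1) -> float:
    """Henderson-Hasselbalch estimate for the CO2/bicarbonate buffer:
    pH = pKa + log10([HCO3-] / [dissolved CO2])."""
    return pKa + math.log10(hco3_mM / dissolved_co2_mM)

# Assumed, roughly physiological values: ~24 mM bicarbonate and ~1.2 mM dissolved CO2.
# The 20:1 ratio gives a pH near 7.4.
print(round(bicarbonate_ph(24.0, 1.2), 2))
```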
Other candidates
A few other elements have been proposed as candidates for supporting biological systems and processes as fundamentally as carbon does, for example, processes such as metabolism. The most frequently suggested alternative is silicon. Silicon, with an atomic number of 14, is more than twice the size of carbon, shares a group in the periodic table with carbon, can also form four valence bonds, and also bonds to itself readily, though generally in the form of crystal lattices rather than long chains. Despite these similarities, silicon is considerably more electropositive than carbon, and silicon compounds do not readily recombine into different permutations in a manner that would plausibly support lifelike processes. Silicon is abundant on Earth, but because it is more electropositive it mainly forms Si–O bonds rather than Si–Si bonds. Boron does not react with acids and does not form chains naturally, so it is not a candidate for life. Arsenic is toxic to life, and its possible candidacy has been rejected. In the 1960s and 1970s other candidates for life still seemed plausible, but with time and further research only carbon has shown the complexity and stability needed to build the very large molecules, such as polymers, on which life depends. On this reasoning, life must be carbon-based.
Fiction
Speculations about the chemical structure and properties of hypothetical non-carbon-based life have been a recurring theme in science fiction. Silicon is often used as a substitute for carbon in fictional lifeforms because of its chemical similarities. In cinematic and literary science fiction, when man-made machines cross from non-living to living, this new form is often presented as an example of non-carbon-based life. Since the advent of the microprocessor in the late 1960s, such machines are often classed as "silicon-based life". Other examples of fictional "silicon-based life" include the 1967 episode "The Devil in the Dark" from Star Trek: The Original Series, in which a living rock creature's biochemistry is based on silicon, and the 1994 The X-Files episode "Firewalker", in which a silicon-based organism is discovered in a volcano.
In the 1984 film adaptation of Arthur C. Clarke's 1982 novel 2010: Odyssey Two, a character argues, "Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect."
In JoJolion, the eighth part of the larger JoJo's Bizarre Adventure series, a mysterious race of silicon-based lifeforms "Rock Humans" serve as the primary antagonists.
Gallery
See also
Carbon source (biology)
Cell biology
CHONPS, a mnemonic acronym for the order of the most common elements in living organisms: carbon, hydrogen, oxygen, nitrogen, phosphorus, and sulfur
Habitable zone for complex life
References
External links
Astrobiology
Biology and pharmacology of chemical elements
Carbon
Life
Fad | A fad, trend, or craze is any form of collective behavior that develops within a culture, a generation or social group in which a group of people enthusiastically follow an impulse for a short time period.
Fads are objects or behaviors that achieve short-lived popularity but fade away. Fads are often seen as sudden, quick-spreading, and short-lived events. Fads include diets, clothing, hairstyles, toys, and more. Some popular fads throughout history are toys such as yo-yos, hula hoops, and fad dances such as the Macarena, floss and the twist.
Similar to habits or customs but less durable, fads often result from an activity or behavior being perceived as popular or exciting within a peer group, or being deemed "cool" as often promoted by social networks. A fad is said to "catch on" when the number of people adopting it begins to increase to the point of being noteworthy or going viral. Fads often fade quickly when the perception of novelty is gone.
Overview
The specific nature of the behavior associated with a fad can be of any type including unusual language usage, distinctive clothing, fad diets or frauds such as pyramid schemes. Apart from general novelty, mass marketing, emotional blackmail, peer pressure, or the desire to conform may drive fads. Popular celebrities can also drive fads, for example the highly popularizing effect of Oprah's Book Club.
Though some consider the term trend equivalent to fad, a fad is generally considered a quick and short behavior whereas a trend is one that evolves into a long term or even permanent change.
Economics
In economics, the term is used in a similar way. Fads are mean-reverting deviations from intrinsic value caused by social or psychological forces similar to those that cause fashions in political philosophies or consumerisation.
Formation
Many contemporary fads share similar patterns of social organization. Several different models serve to examine fads and how they spread.
One way of looking at the spread of fads is through the top-down model, which argues that fashion is created for the elite, and from the elite, fashion spreads to lower classes. Early adopters might not necessarily be those of a high status, but they have sufficient resources that allow them to experiment with new innovations. When looking at the top-down model, sociologists like to highlight the role of selection. The elite might be the ones that introduce certain fads, but other people must choose to adopt those fads.
Others may argue that not all fads begin with their adopters. Social life already provides people with ideas that can help create a basis for new and innovative fads. Companies can look at what people are already interested in and create something from that information. The ideas behind fads are not always original; they might stem from what is already popular at the time. Recreation and style faddists may try out variations of a basic pattern or idea already in existence.
Another way of looking at the spread of fads is through a symbolic interaction view. People learn their behaviors from the people around them. When it comes to collective behavior, the emergence of these shared rules, meanings, and emotions are more dependent on the cues of the situation, rather than physiological arousal. This connection to symbolic interactionism, a theory that explains people's actions as being directed by shared meanings and assumptions, explains that fads are spread because people attach meaning and emotion to objects, and not because the object has practical use, for instance. People might adopt a fad because of the meanings and assumptions they share with the other people who have adopted that fad. People may join other adopters of the fad because they enjoy being a part of a group and what that symbolizes. Some people may join because they want to feel like an insider. When multiple people adopt the same fad, they may feel like they have made the right choice because other people have made that same choice.
Termination
Primarily, fads end because all innovative possibilities have been exhausted. Fads begin to fade when people no longer see them as new and unique. As more people follow the fad, some might start to see it as "overcrowded", and it no longer holds the same appeal. Many times, those who first adopt the fad also abandon it first. They begin to recognize that their preoccupation with the fad leads them to neglect some of their routine activities, and they realize the negative aspects of their behavior. Once the faddists are no longer producing new variations of the fad, people begin to realize their neglect of other activities, and the dangers of the fad. Not everyone completely abandons the fad, however, and parts may remain.
A study examined why certain fads die out quicker than others. A marketing professor at the University of Pennsylvania's Wharton School of Business, Jonah Berger and his colleague, Gael Le Mens, studied baby names in the United States and France to help explore the termination of fads. According to their results, the faster the names became popular, the faster they lost their popularity. They also found that the least successful names overall were those that caught on most quickly. Fads, like baby names, often lose their appeal just as quickly as they gained it.
Collective behavior
Fads can fit under the broad umbrella of collective behavior, which are behaviors engaged in by a large but loosely connected group of people. Other than fads, collective behavior includes the activities of people in crowds, panics, fashions, crazes, and more.
Robert E. Park, the man who created the term collective behavior, defined it as "the behavior of individuals under the influence of an impulse that is common and collective, an impulse, in other words, that is the result of social interaction". Fads are seen as impulsive, driven by emotions; however, they can bring together groups of people who may not have much in common other than their investment in the fad.
Collective obsession
Fads can also fit under the umbrella of "collective obsessions". Collective obsessions have three main features in common. The first, and most obvious sign, is an increase in frequency and intensity of a specific belief or behavior. A fad's popularity increases quickly in frequency and intensity, whereas a trend grows more slowly. The second is that the behavior is seen as ridiculous, irrational, or evil to the people who are not a part of the obsession. Some people might see those who follow certain fads as unreasonable and irrational. To these people, the fad is ridiculous, and people's obsession of it is just as ridiculous. The third is, after it has reached a peak, it drops off abruptly and then it is followed by a counter obsession. A counter obsession means that once the fad is over, if one engages in the fad they will be ridiculed. A fad's popularity often decreases at a rapid rate once its novelty wears off. Some people might start to criticize the fad after pointing out that it is no longer popular, so it must not have been "worth the hype".
See also
Bandwagon effect
:Category:Fads (notable fads through history)
Coolhunting
Crowd psychology
Google Trends
Hype
List of Internet phenomena
Market trend
Memetics
Peer pressure
Retro style
Social contagion
Social mania
Viral phenomenon
15 minutes of fame
Bellwether (1996 novel)
Notes
References
Best, Joel (2006). Flavor of the Month: Why Smart People Fall for Fads. University of California Press.
Burke, Sarah. "5 Marketing Strategies, 1 Question: Fad or Trend?". Spokal.
Conley, Dalton (2015). You may ask yourself: An introduction to thinking like a sociologist. New York: W.W. Norton & Co.
Griffith, Benjamin (2013). "College Fads". St. James Encyclopedia of Popular Culture – via Gale Virtual Reference Library.
Heussner, Ki Mae. "7 Fads You Won't Forget". ABC News.
Killian, Lewis M.; Smelser, Neil J.; Turner, Ralph H. "Collective behavior". Encyclopædia Britannica.
External links
Popular culture
Crowd psychology
Physical geography | Physical geography (also known as physiography) is one of the three main branches of geography. Physical geography is the branch of natural science which deals with the processes and patterns in the natural environment such as the atmosphere, hydrosphere, biosphere, and geosphere. This focus is in contrast with the branch of human geography, which focuses on the built environment, and technical geography, which focuses on using, studying, and creating tools to obtain, analyze, interpret, and understand spatial information. The three branches have significant overlap, however.
Sub-branches
Physical geography can be divided into several branches or related fields, as follows:
Geomorphology is concerned with understanding the surface of the Earth and the processes by which it is shaped, both at the present as well as in the past. Geomorphology as a field has several sub-fields that deal with the specific landforms of various environments, e.g. desert geomorphology and fluvial geomorphology; however, these sub-fields are united by the core processes which cause them, mainly tectonic or climatic processes. Geomorphology seeks to understand landform history and dynamics, and predict future changes through a combination of field observation, physical experiment, and numerical modeling (Geomorphometry). Early studies in geomorphology are the foundation for pedology, one of two main branches of soil science.
Hydrology is predominantly concerned with the amounts and quality of water moving and accumulating on the land surface and in the soils and rocks near the surface and is typified by the hydrological cycle. Thus the field encompasses water in rivers, lakes, aquifers and to an extent glaciers, in which the field examines the process and dynamics involved in these bodies of water. Hydrology has historically had an important connection with engineering and has thus developed a largely quantitative method in its research; however, it does have an earth science side that embraces the systems approach. Similar to most fields of physical geography it has sub-fields that examine the specific bodies of water or their interaction with other spheres e.g. limnology and ecohydrology.
Glaciology is the study of glaciers and ice sheets, or more commonly the cryosphere or ice and phenomena that involve ice. Glaciology groups the latter (ice sheets) as continental glaciers and the former (glaciers) as alpine glaciers. Although research in the areas is similar to research undertaken into both the dynamics of ice sheets and glaciers, the former tends to be concerned with the interaction of ice sheets with the present climate and the latter with the impact of glaciers on the landscape. Glaciology also has a vast array of sub-fields examining the factors and processes involved in ice sheets and glaciers e.g. snow hydrology and glacial geology.
Biogeography is the science which deals with geographic patterns of species distribution and the processes that result in these patterns. Biogeography emerged as a field of study as a result of the work of Alfred Russel Wallace, although the field prior to the late twentieth century had largely been viewed as historic in its outlook and descriptive in its approach. The main stimulus for the field since its founding has been that of evolution, plate tectonics and the theory of island biogeography. The field can largely be divided into five sub-fields: island biogeography, paleobiogeography, phylogeography, zoogeography and phytogeography.
Climatology is the study of the climate, scientifically defined as weather conditions averaged over a long period of time. Climatology examines both the nature of micro (local) and macro (global) climates and the natural and anthropogenic influences on them. The field is also sub-divided largely into the climates of various regions and the study of specific phenomena or time periods e.g. tropical cyclone rainfall climatology and paleoclimatology.
Soil geography deals with the distribution of soils across the terrain. This discipline, between geography and soil science, is fundamental to both physical geography and pedology. Pedology is the study of soils in their natural environment. It deals with pedogenesis, soil morphology, soil classification. Soil geography studies the spatial distribution of soils as it relates to topography, climate (water, air, temperature), soil life (micro-organisms, plants, animals) and mineral materials within soils (biogeochemical cycles).
Palaeogeography is a cross-disciplinary study that examines the preserved material in the stratigraphic record to determine the distribution of the continents through geologic time. Almost all the evidence for the positions of the continents comes from geology in the form of fossils or paleomagnetism. The use of these data has resulted in evidence for continental drift, plate tectonics, and supercontinents. This, in turn, has supported palaeogeographic theories such as the Wilson cycle.
Coastal geography is the study of the dynamic interface between the ocean and the land, incorporating both the physical geography (i.e. coastal geomorphology, geology, and oceanography) and the human geography of the coast. It involves an understanding of coastal weathering processes, particularly wave action, sediment movement and weathering, and also the ways in which humans interact with the coast. Coastal geography, although predominantly geomorphological in its research, is not just concerned with coastal landforms, but also the causes and influences of sea level change.
Oceanography is the branch of physical geography that studies the Earth's oceans and seas. It covers a wide range of topics, including marine organisms and ecosystem dynamics (biological oceanography); ocean currents, waves, and geophysical fluid dynamics (physical oceanography); plate tectonics and the geology of the sea floor (geological oceanography); and fluxes of various chemical substances and physical properties within the ocean and across its boundaries (chemical oceanography). These diverse topics reflect multiple disciplines that oceanographers blend to further knowledge of the world ocean and understanding of processes within it.
Quaternary science is an interdisciplinary field of study focusing on the Quaternary period, which encompasses the last 2.6 million years. The field studies the last ice age and the recent interglacial, the Holocene, and uses proxy evidence to reconstruct the past environments during this period to infer the climatic and environmental changes that have occurred.
Landscape ecology is a sub-discipline of ecology and geography that addresses how spatial variation in the landscape affects ecological processes such as the distribution and flow of energy, materials, and individuals in the environment (which, in turn, may influence the distribution of landscape "elements" themselves, such as hedgerows). The field was largely founded by the German geographer Carl Troll. Landscape ecology typically deals with problems in an applied and holistic context. The main difference between biogeography and landscape ecology is that the latter is concerned with how flows of energy and material are changed and their impacts on the landscape, whereas the former is concerned with the spatial patterns of species and chemical cycles.
Geomatics is the field of gathering, storing, processing, and delivering geographic information, or spatially referenced information. Geomatics includes geodesy (scientific discipline that deals with the measurement and representation of the earth, its gravitational field, and other geodynamic phenomena, such as crustal motion, oceanic tides, and polar motion), cartography, geographical information science (GIS) and remote sensing (the short or large-scale acquisition of information of an object or phenomenon, by the use of either recording or real-time sensing devices that are not in physical or intimate contact with the object).
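As a minimal illustration of the kind of spatially referenced computation that geodesy and geomatics rely on, the sketch below estimates the great-circle distance between two coordinates using the haversine formula. It is only a sketch under simplifying assumptions: a spherical Earth of radius 6371 km rather than a proper geodetic ellipsoid, and arbitrary example coordinates.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float, radius_km: float = 6371.0) -> float:
    """Great-circle distance between two points on a spherical Earth (haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Example: Berlin (52.52 N, 13.405 E) to Paris (48.8566 N, 2.3522 E), roughly 880 km.
print(round(haversine_km(52.52, 13.405, 48.8566, 2.3522), 1))
```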
Environmental geography is a branch of geography that analyzes the spatial aspects of interactions between humans and the natural world. The branch bridges the divide between human and physical geography and thus requires an understanding of the dynamics of geology, meteorology, hydrology, biogeography, and geomorphology, as well as the ways in which human societies conceptualize the environment. The branch was previously more visible in research than at present, with theories such as environmental determinism linking society with the environment; it has largely become the domain of the study of environmental management or anthropogenic influences.
Journals and literature
Physical geography and earth science journals communicate and document the results of research carried out in universities and various other research institutions. Most journals cover a specific field and publish research within that field; however, unlike human geographers, physical geographers tend to publish in interdisciplinary journals rather than predominantly geography journals. The research is normally expressed in the form of a scientific paper. Additionally, textbooks, books, and magazines communicate research to laypeople, although these tend to focus on environmental issues or cultural dilemmas. Examples of journals that publish articles from physical geographers are:
Historical evolution of the discipline
From the birth of geography as a science during the Greek classical period and until the late nineteenth century with the birth of anthropogeography (human geography), geography was almost exclusively a natural science: the study of location and the descriptive gazetteer of all places of the known world. Several of the best-known works of this long period can be cited as examples, from Strabo (Geography), Eratosthenes (Geographika) and Dionysius Periegetes (Periegesis Oiceumene) in the Ancient Age, through the Summa de Geografía of Martín Fernández de Enciso from the early sixteenth century, which described the New World for the first time, to Alexander von Humboldt's Kosmos in the nineteenth century, in which geography is regarded as a physical and natural science.
During the eighteenth and nineteenth centuries, a controversy exported from geology, between supporters of James Hutton (uniformitarianism thesis) and Georges Cuvier (catastrophism) strongly influenced the field of geography, because geography at this time was a natural science.
Two historical events during the nineteenth century had a great effect on the further development of physical geography. The first was the European colonial expansion in Asia, Africa, Australia and even America in search of raw materials required by industries during the Industrial Revolution. This fostered the creation of geography departments in the universities of the colonial powers and the birth and development of national geographical societies, thus giving rise to the process identified by Horacio Capel as the institutionalization of geography.
The exploration of Siberia is an example. In the mid-eighteenth century, many geographers were sent to perform geographical surveys in the area of Arctic Siberia. Among these was Mikhail Lomonosov, who is considered the patriarch of Russian geography. In the mid-1750s Lomonosov began working in the Department of Geography of the Academy of Sciences to conduct research in Siberia. He showed the organic origin of soil and developed a comprehensive law on the movement of the ice, thereby founding a new branch of geography: glaciology. In 1755, Moscow University was founded on his initiative, and there he promoted the study of geography and the training of geographers. In 1758 he was appointed director of the Department of Geography of the Academy of Sciences, a post from which he developed a working methodology for geographical surveys that guided the most important long expeditions and geographical studies in Russia.
The contributions of the Russian school became more frequent through his disciples, and in the nineteenth century it produced great geographers such as Vasily Dokuchaev, who carried out works of great importance such as the principle of comprehensive analysis of the territory and the study of the Russian chernozem. In the latter, he introduced the geographical concept of soil, as distinct from a simple geological stratum, and thus founded a new geographic area of study: pedology. Climatology also received a strong boost from the Russian school through Wladimir Köppen, whose main contribution, climate classification, is still valid today. This great geographer also contributed to paleogeography through his work "The climates of the geological past", and he is considered the father of paleoclimatology. Other Russian geographers who made great contributions to the discipline in this period were N. M. Sibirtsev, Pyotr Semyonov, K. D. Glinka, and Neustruev, among others.
The second important process is the theory of evolution by Darwin in mid-century (which decisively influenced the work of Friedrich Ratzel, who had academic training as a zoologist and was a follower of Darwin's ideas) which meant an important impetus in the development of Biogeography.
Another major event in the late nineteenth and early twentieth centuries took place in the United States. William Morris Davis not only made important contributions to the establishment of the discipline in his country but revolutionized the field by developing the cycle of erosion theory, which he proposed as a paradigm for geography in general, although in actuality it served as a paradigm for physical geography. His theory explained that mountains and other landforms are shaped by factors that are manifested cyclically. He explained that the cycle begins with the lifting of the relief by geological processes (faults, volcanism, tectonic upheaval, etc.). Factors such as rivers and runoff then begin to create V-shaped valleys between the mountains (the stage called "youth"). During this first stage, the terrain is steeper and more irregular. Over time, the currents carve wider valleys ("maturity") and then begin to meander, leaving only towering hills ("senescence"). Finally, everything is reduced to a flat plain at the lowest elevation possible (called the "base level"). Davis called this plain a "peneplain", meaning "almost a plain". Then river rejuvenation occurs, there is another uplift of mountains, and the cycle continues.
Although Davis's theory is not entirely accurate, it was absolutely revolutionary and unique in its time and helped to modernize geography and create the subfield of geomorphology. Its implications prompted a myriad of research in various branches of physical geography. In the case of paleogeography, the theory provided a model for understanding the evolution of the landscape. For hydrology, glaciology, and climatology, it was a boost because it encouraged the study of how geographic factors shape the landscape and affect the cycle. The bulk of the work of William Morris Davis led to the development of a new branch of physical geography, geomorphology, whose contents until then did not differ from the rest of geography. Shortly afterwards this branch would undergo major development. Some of his disciples made significant contributions to various branches of physical geography, such as Curtis Marbut with his invaluable legacy for pedology, Mark Jefferson, and Isaiah Bowman, among others.
Notable physical geographers
Eratosthenes (276–194 BC), who invented the discipline of geography. He made the first known reliable estimation of the Earth's size. He is considered the father of mathematical geography and geodesy.
Ptolemy (c. 90 – c. 168), who compiled Greek and Roman knowledge to produce the book Geographia.
Abū Rayhān Bīrūnī (973–1048 AD), considered the father of geodesy.
Ibn Sina (Avicenna, 980–1037), who formulated the law of superposition and concept of uniformitarianism in Kitāb al-Šifāʾ (also called The Book of Healing).
Muhammad al-Idrisi (Dreses, 1100–1165), who drew the Tabula Rogeriana, the most accurate world map in pre-modern times.
Piri Reis (1465 – c. 1554), whose Piri Reis map is the oldest surviving world map to include the Americas and possibly Antarctica.
Gerardus Mercator (1512–1594), an innovative cartographer and originator of the Mercator projection.
Bernhardus Varenius (1622–1650), who wrote the important work General Geography (1650), the first overview of geography and a foundation of modern geography.
Mikhail Lomonosov (1711–1765), father of Russian geography and founded the study of glaciology.
Alexander von Humboldt (1769–1859), considered the father of modern geography. Published Cosmos and founded the study of biogeography.
Arnold Henry Guyot (1807–1884), who noted the structure of glaciers and advanced the understanding of glacial motion, especially in fast ice flow.
Louis Agassiz (1807–1873), the author of a glacial theory which disputed the notion of a steady-cooling Earth.
Alfred Russel Wallace (1823–1913), founder of modern biogeography and the Wallace line.
Vasily Dokuchaev (1840–1903), patriarch of Russian geography and founder of pedology.
Wladimir Peter Köppen (1846–1940), developer of most important climate classification and founder of Paleoclimatology.
William Morris Davis (1850–1934), father of American geography, founder of Geomorphology and developer of the geographical cycle theory.
John Francon Williams FRGS (1854–1911), who wrote the seminal work Geography of the Oceans, published in 1881.
Walther Penck (1888–1923), proponent of the cycle of erosion and the simultaneous occurrence of uplift and denudation.
Sir Ernest Shackleton (1874–1922), Antarctic explorer during the Heroic Age of Antarctic Exploration.
Robert E. Horton (1875–1945), founder of modern hydrology and concepts such as infiltration capacity and overland flow.
J Harlen Bretz (1882–1981), pioneer of research into the shaping of landscapes by catastrophic floods, most notably the Bretz (Missoula) floods.
Luis García Sáinz (1894–1965), pioneer of physical geography in Spain.
Willi Dansgaard (1922–2011), palaeoclimatologist and quaternary scientist, instrumental in the use of oxygen-isotope dating and co-identifier of Dansgaard-Oeschger events.
Hans Oeschger (1927–1998), palaeoclimatologist and pioneer in ice core research, co-identifier of Dansgaard–Oeschger events.
Richard Chorley (1927–2002), a key contributor to the quantitative revolution and the use of systems theory in geography.
Sir Nicholas Shackleton (1937–2006), who demonstrated that oscillations in climate over the past few million years could be correlated with variations in the orbital and positional relationship between the Earth and the Sun.
See also
Areography
Atmosphere of Earth
Concepts and Techniques in Modern Geography
Earth system science
Environmental science
Environmental studies
Geographic information science
Geographic information system
Geophysics
Geostatistics
Global Positioning System
Planetary science
Physiographic regions of the world
Selenography
Technical geography
References
Further reading
Pidwirny, Michael. (2014). Glossary of Terms for Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Pidwirny, Michael. (2014). Understanding Physical Geography. Planet Earth Publishing, Kelowna, Canada. Available on Google Play.
Reynolds, Stephen J. et al. (2015). Exploring Physical Geography. [A Visual Textbook, Featuring more than 2500 Photographies & Illustrations]. McGraw-Hill Education, New York.
External links
Physiography by T.H. Huxley, 1878, full text, physical geography of the Thames River Basin
Fundamentals of Physical Geography, 2nd Edition, by M. Pidwirny, 2006, full text
Physical Geography for Students and Teachers, UK National Grid For Learning
Earth sciences
Protein–protein interaction | Protein–protein interactions (PPIs) are physical contacts of high specificity established between two or more protein molecules as a result of biochemical events steered by interactions that include electrostatic forces, hydrogen bonding and the hydrophobic effect. Many are physical contacts with molecular associations between chains that occur in a cell or in a living organism in a specific biomolecular context.
Proteins rarely act alone as their functions tend to be regulated. Many molecular processes within a cell are carried out by molecular machines that are built from numerous protein components organized by their PPIs. These physiological interactions make up the so-called interactomics of the organism, while aberrant PPIs are the basis of multiple aggregation-related diseases, such as Creutzfeldt–Jakob and Alzheimer's diseases.
PPIs have been studied with many methods and from different perspectives: biochemistry, quantum chemistry, molecular dynamics, signal transduction, among others. All this information enables the creation of large protein interaction networks – similar to metabolic or genetic/epigenetic networks – that empower the current knowledge on biochemical cascades and molecular etiology of disease, as well as the discovery of putative protein targets of therapeutic interest.
Examples
Electron transfer proteins
In many metabolic reactions, a protein that acts as an electron carrier binds to an enzyme that acts as its reductase. After it receives an electron, it dissociates and then binds to the next enzyme that acts as its oxidase (i.e. an acceptor of the electron). These interactions between proteins are dependent on highly specific binding between proteins to ensure efficient electron transfer. Examples: mitochondrial oxidative phosphorylation chain system components cytochrome c-reductase / cytochrome c / cytochrome c oxidase; microsomal and mitochondrial P450 systems.
In the case of the mitochondrial P450 systems, the specific residues involved in the binding of the electron transfer protein adrenodoxin to its reductase were identified as two basic Arg residues on the surface of the reductase and two acidic Asp residues on the adrenodoxin.
More recent work on the phylogeny of the reductase has shown that these residues involved in protein–protein interactions have been conserved throughout the evolution of this enzyme.
Signal transduction
The activity of the cell is regulated by extracellular signals. Signal propagation inside and/or along the interior of cells depends on PPIs between the various signaling molecules. The recruitment of signaling pathways through PPIs is called signal transduction and plays a fundamental role in many biological processes and in many diseases including Parkinson's disease and cancer.
Membrane transport
A protein may be carrying another protein (for example, from cytoplasm to nucleus or vice versa in the case of the nuclear pore importins).
Cell metabolism
In many biosynthetic processes enzymes interact with each other to produce small compounds or other macromolecules.
Muscle contraction
Physiology of muscle contraction involves several interactions. Myosin filaments act as molecular motors and, by binding to actin, enable filament sliding. Furthermore, members of the skeletal muscle lipid droplet-associated protein family associate with other proteins, such as the activator of adipose triglyceride lipase and its coactivator comparative gene identification-58, to regulate lipolysis in skeletal muscle.
Types
To describe the types of protein–protein interactions (PPIs) it is important to consider that proteins can interact in a "transient" way (to produce some specific effect in a short time, like signal transduction) or to interact with other proteins in a "stable" way to form complexes that become molecular machines within the living systems. A protein complex assembly can result in the formation of homo-oligomeric or hetero-oligomeric complexes. In addition to the conventional complexes, as enzyme-inhibitor and antibody-antigen, interactions can also be established between domain-domain and domain-peptide. Another important distinction to identify protein–protein interactions is the way they have been determined, since there are techniques that measure direct physical interactions between protein pairs, named “binary” methods, while there are other techniques that measure physical interactions among groups of proteins, without pairwise determination of protein partners, named “co-complex” methods.
Homo-oligomers vs. hetero-oligomers
Homo-oligomers are macromolecular complexes constituted by only one type of protein subunit. Protein subunit assembly is guided by the establishment of non-covalent interactions in the quaternary structure of the protein. Disruption of homo-oligomers in order to return to the initial individual monomers often requires denaturation of the complex. Several enzymes, carrier proteins, scaffolding proteins, and transcriptional regulatory factors carry out their functions as homo-oligomers.
Distinct protein subunits interact in hetero-oligomers, which are essential to control several cellular functions. The importance of the communication between heterologous proteins is even more evident during cell signaling events and such interactions are only possible due to structural domains within the proteins (as described below).
Stable interactions vs. transient interactions
Stable interactions involve proteins that interact for a long time, forming part of permanent complexes as subunits in order to carry out functional roles. This is usually the case for homo-oligomers (e.g. cytochrome c) and some hetero-oligomeric proteins, such as the subunits of ATPase. On the other hand, a protein may interact briefly and in a reversible manner with other proteins in only certain cellular contexts – cell type, cell cycle stage, external factors, presence of other binding proteins, etc. – as happens with most of the proteins involved in biochemical cascades. These are called transient interactions. For example, some G protein–coupled receptors only transiently bind to Gi/o proteins when they are activated by extracellular ligands, while some Gq-coupled receptors, such as muscarinic receptor M3, pre-couple with Gq proteins prior to the receptor-ligand binding. Interactions between intrinsically disordered protein regions and globular protein domains (i.e. MoRFs) are transient interactions.
Covalent vs. non-covalent
Covalent interactions are those with the strongest association and are formed by disulphide bonds or electron sharing. While rare, these interactions are determinant in some posttranslational modifications, as ubiquitination and SUMOylation. Non-covalent bonds are usually established during transient interactions by the combination of weaker bonds, such as hydrogen bonds, ionic interactions, Van der Waals forces, or hydrophobic bonds.
Role of water
Water molecules play a significant role in the interactions between proteins. The crystal structures of complexes, obtained at high resolution from different but homologous proteins, have shown that some interface water molecules are conserved between homologous complexes. The majority of the interface water molecules make hydrogen bonds with both partners of each complex. Some interface amino acid residues or atomic groups of one protein partner engage in both direct and water mediated interactions with the other protein partner. Doubly indirect interactions, mediated by two water molecules, are more numerous in the homologous complexes of low affinity. Carefully conducted mutagenesis experiments, e.g. changing a tyrosine residue into a phenylalanine, have shown that water mediated interactions can contribute to the energy of interaction. Thus, water molecules may facilitate the interactions and cross-recognitions between proteins.
Structure
The molecular structures of many protein complexes have been unlocked by the technique of X-ray crystallography. The first structure to be solved by this method was that of sperm whale myoglobin by Sir John Cowdery Kendrew. In this technique the angles and intensities of a beam of X-rays diffracted by crystalline atoms are detected in a film, thus producing a three-dimensional picture of the density of electrons within the crystal.
Later, nuclear magnetic resonance also started to be applied with the aim of unravelling the molecular structure of protein complexes. One of the first examples was the structure of calmodulin-binding domains bound to calmodulin. This technique is based on the study of magnetic properties of atomic nuclei, thus determining physical and chemical properties of the correspondent atoms or the molecules. Nuclear magnetic resonance is advantageous for characterizing weak PPIs.
Protein-protein interaction domains
Some proteins have specific structural domains or sequence motifs that provide binding to other proteins. Here are some examples of such domains:
Src homology 2 (SH2) domain
SH2 domains are structurally composed of a three-stranded twisted beta sheet flanked by two alpha-helices. The existence of a deep binding pocket with high affinity for phosphotyrosine, but not for phosphoserine or phosphothreonine, is essential for the recognition of tyrosine-phosphorylated proteins, mainly autophosphorylated growth factor receptors. Growth factor receptor binding proteins and phospholipase Cγ are examples of proteins that have SH2 domains.
Src homology 3 (SH3) domain
Structurally, SH3 domains are constituted by a beta barrel formed by two orthogonal beta sheets and three anti-parallel beta strands. These domains recognize proline enriched sequences, as polyproline type II helical structure (PXXP motifs) in cell signaling proteins like protein tyrosine kinases and the growth factor receptor bound protein 2 (Grb2).
Phosphotyrosine-binding (PTB) domain
PTB domains interact with sequences that contain a phosphotyrosine group. These domains can be found in the insulin receptor substrate.
LIM domain
LIM domains were initially identified in three homeodomain transcription factors (lin-11, isl-1, and mec-3). In addition to these homeodomain proteins and other proteins involved in development, LIM domains have also been identified in non-homeodomain proteins with relevant roles in cellular differentiation, association with the cytoskeleton, and senescence. These domains contain a tandem cysteine-rich Zn2+-finger motif and embrace the consensus sequence CX2CX16-23HX2CX2CX2CX16-21CX2C/H/D. LIM domains bind to PDZ domains, bHLH transcription factors, and other LIM domains.
Sterile alpha motif (SAM) domain
SAM domains are composed by five helices forming a compact package with a conserved hydrophobic core. These domains, which can be found in the Eph receptor and the stromal interaction molecule (STIM) for example, bind to non-SAM domain-containing proteins and they also appear to have the ability to bind RNA.
PDZ domain
PDZ domains were first identified in three guanylate kinases: PSD-95, DlgA and ZO-1. These domains recognize carboxy-terminal tri-peptide motifs (S/TXV), other PDZ domains or LIM domains and bind them through a short peptide sequence that has a C-terminal hydrophobic residue. Some of the proteins identified as having PDZ domains are scaffolding proteins or seem to be involved in ion receptor assembling and receptor-enzyme complexes formation.
FERM domain
FERM domains contain basic residues capable of binding PtdIns(4,5)P2. Talin and focal adhesion kinase (FAK) are two of the proteins that present FERM domains.
Calponin homology (CH) domain
CH domains are mainly present in cytoskeletal proteins as parvin.
Pleckstrin homology domain
Pleckstrin homology domains bind to phosphoinositides and acid domains in signaling proteins.
WW domain
WW domains bind to proline enriched sequences.
WSxWS motif
Found in cytokine receptors
Properties of the interface
The study of the molecular structure can give fine details about the interface that enables the interaction between proteins. When characterizing PPI interfaces it is important to take into account the type of complex.
Parameters evaluated include size (measured in absolute dimensions Å2 or in solvent-accessible surface area (SASA)), shape, complementarity between surfaces, residue interface propensities, hydrophobicity, segmentation and secondary structure, and conformational changes on complex formation.
The great majority of PPI interfaces reflects the composition of protein surfaces, rather than the protein cores, in spite of being frequently enriched in hydrophobic residues, particularly in aromatic residues. PPI interfaces are dynamic and frequently planar, although they can be globular and protruding as well. Based on three structures – insulin dimer, trypsin-pancreatic trypsin inhibitor complex, and oxyhaemoglobin – Cyrus Chothia and Joel Janin found that between 1,130 and 1,720 Å2 of surface area was removed from contact with water indicating that hydrophobicity is a major factor of stabilization of PPIs. Later studies refined the buried surface area of the majority of interactions to 1,600±350 Å2. However, much larger interaction interfaces were also observed and were associated with significant changes in conformation of one of the interaction partners. PPIs interfaces exhibit both shape and electrostatic complementarity.
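A common way to quantify an interface of this kind is the buried surface area: the solvent-accessible surface area (SASA) lost when the two chains form a complex. The sketch below only shows the arithmetic; the per-chain and complex SASA values are placeholders standing in for numbers that would normally come from a SASA calculator run on the separate chains and on the complex.

```python
def buried_surface_area(sasa_chain_a: float, sasa_chain_b: float, sasa_complex: float) -> float:
    """Total surface area (square angstroms) removed from solvent contact on complex formation."""
    return sasa_chain_a + sasa_chain_b - sasa_complex

# Placeholder SASA values in square angstroms for two isolated chains and their complex.
interface_area = buried_surface_area(8500.0, 7200.0, 14100.0)
print(interface_area)  # 1600.0, within the ~1600 +/- 350 A^2 range reported for many PPIs
```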
Regulation
Protein concentration, which in turn is affected by expression levels and degradation rates;
Protein affinity for proteins or other binding ligands;
Ligands concentrations (substrates, ions, etc.);
Presence of other proteins, nucleic acids, and ions;
Electric fields around proteins;
Occurrence of covalent modifications.
Experimental methods
There are a multitude of methods to detect PPIs. Each of the approaches has its own strengths and weaknesses, especially with regard to the sensitivity and specificity of the method. The most conventional and widely used high-throughput methods are yeast two-hybrid screening and affinity purification coupled to mass spectrometry.
Yeast two-hybrid screening
This system was first described in 1989 by Fields and Song using Saccharomyces cerevisiae as the biological model. Yeast two-hybrid allows the identification of pairwise PPIs (binary method) in vivo, in which the two proteins are tested for direct biophysical interaction. The Y2H is based on the functional reconstitution of the yeast transcription factor Gal4 and subsequent activation of a selective reporter such as His3. To test two proteins for interaction, two protein expression constructs are made: one protein (X) is fused to the Gal4 DNA-binding domain (DB) and a second protein (Y) is fused to the Gal4 activation domain (AD). In the assay, yeast cells are transformed with these constructs. Transcription of reporter genes does not occur unless bait (DB-X) and prey (AD-Y) interact with each other and form a functional Gal4 transcription factor. Thus, the interaction between proteins can be inferred by the presence of the products resultant from the reporter gene expression. In cases in which the reporter gene expresses enzymes that allow the yeast to synthesize essential amino acids or nucleotides, yeast growth under selective media conditions indicates that the two proteins tested are interacting. Recently, software to detect and prioritize protein interactions was published.
Despite its usefulness, the yeast two-hybrid system has limitations. It uses yeast as the main host system, which can be a problem when studying proteins that contain mammalian-specific post-translational modifications. The number of PPIs identified is usually low because of a high false negative rate, and the method understates membrane proteins, for example.
In initial studies that utilized Y2H, proper controls for false positives (e.g. when DB-X activates the reporter gene without the presence of AD-Y) were frequently not done, leading to a higher than normal false positive rate. An empirical framework must be implemented to control for these false positives. Limitations in coverage of membrane proteins have been overcome by the emergence of yeast two-hybrid variants, such as the membrane yeast two-hybrid (MYTH) and the split-ubiquitin system, which are not limited to interactions that occur in the nucleus, and the bacterial two-hybrid system, performed in bacteria.
Affinity purification coupled to mass spectrometry
Affinity purification coupled to mass spectrometry mostly detects stable interactions and thus better indicates functional in vivo PPIs. This method starts by purification of the tagged protein, which is expressed in the cell usually at in vivo concentrations, and its interacting proteins (affinity purification). One of the most advantageous and widely used methods to purify proteins with very low contaminating background is the tandem affinity purification, developed by Bertrand Seraphin and Matthias Mann and respective colleagues. PPIs can then be quantitatively and qualitatively analysed by mass spectrometry using different methods: chemical incorporation, biological or metabolic incorporation (SILAC), and label-free methods. Furthermore, network theory has been used to study the whole set of identified protein–protein interactions in cells.
Nucleic acid programmable protein array (NAPPA)
This system was first developed by LaBaer and colleagues in 2004 using an in vitro transcription and translation system. They used a DNA template encoding the gene of interest fused with GST protein, immobilized on a solid surface. Anti-GST antibody and biotinylated plasmid DNA were bound to an aminopropyltriethoxysilane (APTES)-coated slide. BSA can improve the binding efficiency of the DNA. The biotinylated plasmid DNA was bound by avidin. New protein was synthesized using a cell-free expression system, i.e. rabbit reticulocyte lysate (RRL), and then captured through the anti-GST antibody bound on the slide. To test a protein–protein interaction, the target protein cDNA and query protein cDNA were immobilized on the same coated slide. Using the in vitro transcription and translation system, the target and query proteins were synthesized from the same extract. The target protein was bound to the array by the antibody coated on the slide, and the query protein was used to probe the array. The query protein was tagged with a hemagglutinin (HA) epitope. Thus, the interaction between the two proteins was visualized with the antibody against HA.
Intragenic complementation
When multiple copies of a polypeptide encoded by a gene form a complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation has been demonstrated in many different genes in a variety of organisms including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe; the bacterium Salmonella typhimurium; the virus bacteriophage T4, an RNA virus and humans. In such studies, numerous mutations defective in the same gene were often isolated and mapped in a linear order on the basis of recombination frequencies to form a genetic map of the gene. Separately, the mutants were tested in pairwise combinations to measure complementation. An analysis of the results from such studies led to the conclusion that intragenic complementation, in general, arises from the interaction of differently defective polypeptide monomers to form a multimer. Genes that encode multimer-forming polypeptides appear to be common. One interpretation of the data is that polypeptide monomers are often aligned in the multimer in such a way that mutant polypeptides defective at nearby sites in the genetic map tend to form a mixed multimer that functions poorly, whereas mutant polypeptides defective at distant sites tend to form a mixed multimer that functions more effectively. Direct interaction of two nascent proteins emerging from nearby ribosomes appears to be a general mechanism for homo-oligomer (multimer) formation. Hundreds of protein oligomers were identified that assemble in human cells by such an interaction. The most prevalent form of interaction is between the N-terminal regions of the interacting proteins. Dimer formation appears to be able to occur independently of dedicated assembly machines. The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle.
Other potential methods
Diverse techniques to identify PPIs have been emerging along with technology progression. These include co-immunoprecipitation, protein microarrays, analytical ultracentrifugation, light scattering, fluorescence spectroscopy, luminescence-based mammalian interactome mapping (LUMIER), resonance-energy transfer systems, mammalian protein–protein interaction trap, electro-switchable biosurfaces, protein–fragment complementation assay, as well as real-time label-free measurements by surface plasmon resonance, and calorimetry.
Computational methods
Computational prediction of protein–protein interactions
The experimental detection and characterization of PPIs is labor-intensive and time-consuming. However, many PPIs can also be predicted computationally, usually using experimental data as a starting point. In addition, methods have been developed that allow the prediction of PPIs de novo, that is, without prior evidence for these interactions.
Genomic context methods
The Rosetta Stone or Domain Fusion method is based on the hypothesis that interacting proteins are sometimes fused into a single protein in another genome. Therefore, we can predict if two proteins may be interacting by determining if they each have non-overlapping sequence similarity to a region of a single protein sequence in another genome.
The Conserved Neighborhood method is based on the hypothesis that if genes encoding two proteins are neighbors on a chromosome in many genomes, then they are likely functionally related (and possibly physically interacting).
The Phylogenetic Profile method is based on the hypothesis that if two or more proteins are concurrently present or absent across several genomes, then they are likely functionally related. Therefore, potentially interacting proteins can be identified by determining the presence or absence of genes across many genomes and selecting those genes which are always present or absent together.
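The phylogenetic profile method lends itself to a simple computational sketch: each protein is encoded as a presence/absence vector across a set of genomes, and pairs of proteins with highly similar profiles are flagged as candidate functional partners. The protein names, profiles, and similarity threshold below are invented for illustration.

```python
from itertools import combinations

# Presence (1) / absence (0) of each protein across five hypothetical genomes.
profiles = {
    "protA": [1, 1, 0, 1, 0],
    "protB": [1, 1, 0, 1, 0],
    "protC": [0, 1, 1, 0, 1],
}

def profile_similarity(p: list, q: list) -> float:
    """Fraction of genomes in which the presence/absence calls of two proteins agree."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

# Report protein pairs whose profiles agree in at least 80% of the genomes.
for name1, name2 in combinations(profiles, 2):
    score = profile_similarity(profiles[name1], profiles[name2])
    if score >= 0.8:
        print(f"{name1} - {name2}: candidate functional link (similarity {score:.2f})")
```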
Text mining methods
Publicly available information from biomedical documents is readily accessible through the internet and is becoming a powerful resource for collecting known protein–protein interactions (PPIs), PPI prediction and protein docking. Text mining is much less costly and time-consuming compared to other high-throughput techniques. Currently, text mining methods generally detect binary relations between interacting proteins from individual sentences using rule/pattern-based information extraction and machine learning approaches. A wide variety of text mining applications for PPI extraction and/or prediction are available for public use, as well as repositories which often store manually validated and/or computationally predicted PPIs. Text mining can be implemented in two stages: information retrieval, where texts containing names of either or both interacting proteins are retrieved and information extraction, where targeted information (interacting proteins, implicated residues, interaction types, etc.) is extracted.
There are also studies using phylogenetic profiling, basing their functionalities on the theory that proteins involved in common pathways co-evolve in a correlated fashion across species. Some more complex text mining methodologies use advanced Natural Language Processing (NLP) techniques and build knowledge networks (for example, considering gene names as nodes and verbs as edges). Other developments involve kernel methods to predict protein interactions.
Machine learning methods
Many computational methods have been suggested and reviewed for predicting protein–protein interactions. Prediction approaches can be grouped into categories based on predictive evidence: protein sequence, comparative genomics, protein domains, protein tertiary structure, and interaction network topology. The construction of a positive set (known interacting protein pairs) and a negative set (non-interacting protein pairs) is needed for the development of a computational prediction model. Prediction models using machine learning techniques can be broadly classified into two main groups: supervised and unsupervised, based on the labeling of input variables according to the expected outcome.
In 2005, integral membrane proteins of Saccharomyces cerevisiae were analyzed using the mating-based ubiquitin system (mbSUS). The system detects interactions of membrane proteins with extracellular signaling proteins. Of the 705 integral membrane proteins, 1,985 different interactions were traced, involving 536 proteins. To sort and classify interactions, a support vector machine was used to define high-, medium- and low-confidence interactions. The split-ubiquitin membrane yeast two-hybrid system uses transcriptional reporters to identify yeast transformants that encode pairs of interacting proteins.
In 2006, random forest, an example of a supervised technique, was found to be the most effective machine learning method for protein interaction prediction. Such methods have been applied to discover protein interactions in the human interactome, specifically the interactomes of membrane proteins and of schizophrenia-associated proteins.
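A minimal supervised-learning sketch in Python, assuming scikit-learn is available: each protein pair is represented by a numeric feature vector (sequence, domain, or network features) and labelled as interacting or not. The features and labels below are random placeholders rather than real interaction data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # one feature vector per protein pair (placeholder values)
y = rng.integers(0, 2, size=500)      # 1 = positive set (interacting), 0 = negative set

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))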
As of 2020, a model using residue cluster classes (RCCs), constructed from the 3DID and Negatome databases, resulted in 96-99% correctly classified instances of protein–protein interactions. RCCs are a computational vector space that mimics protein fold space and includes all simultaneously contacted residue sets, which can be used to analyze protein structure-function relation and evolution.
Databases
Large scale identification of PPIs generated hundreds of thousands of interactions, which were collected together in specialized biological databases that are continuously updated in order to provide complete interactomes. The first of these databases was the Database of Interacting Proteins (DIP).
Primary databases collect information about published PPIs proven to exist via small-scale or large-scale experimental methods. Examples: DIP, Biomolecular Interaction Network Database (BIND), Biological General Repository for Interaction Datasets (BioGRID), Human Protein Reference Database (HPRD), IntAct Molecular Interaction Database, Molecular Interactions Database (MINT), MIPS Protein Interaction Resource on Yeast (MIPS-MPact), and MIPS Mammalian Protein–Protein Interaction Database (MIPS-MPPI).
Meta-databases normally result from the integration of primary databases information, but can also collect some original data.
Prediction databases include many PPIs that are predicted using several techniques (main article). Examples: Human Protein–Protein Interaction Prediction Database (PIPs), Interlogous Interaction Database (I2D), Known and Predicted Protein–Protein Interactions (STRING-db), and Unified Human Interactive (UniHI).
The aforementioned computational methods all depend on source databases whose data can be extrapolated to predict novel protein–protein interactions. Coverage differs greatly between databases. In general, primary databases have the fewest total protein interactions recorded as they do not integrate data from multiple other databases, while prediction databases have the most because they include other forms of evidence in addition to experimental. For example, the primary database IntAct has 572,063 interactions, the meta-database APID has 678,000 interactions, and the predictive database STRING has 25,914,693 interactions. However, it is important to note that some of the interactions in the STRING database are only predicted by computational methods such as Genomic Context and not experimentally verified.
Interaction networks
Information found in PPI databases supports the construction of interaction networks. Although the PPI network of a given query protein can be represented in textbooks, diagrams of whole-cell PPI networks are highly complex and difficult to generate.
One example of a manually produced molecular interaction map is Kurt Kohn's 1999 map of cell cycle control. Drawing on Kohn's map, Schwikowski et al. in 2000 published a paper on PPIs in yeast, linking 1,548 interacting proteins determined by two-hybrid screening. They used a layered graph drawing method to find an initial placement of the nodes and then improved the layout using a force-based algorithm.
Bioinformatic tools have been developed to simplify the difficult task of visualizing molecular interaction networks and complement them with other types of data. For instance, Cytoscape is an open-source software widely used and many plugins are currently available. Pajek software is advantageous for the visualization and analysis of very large networks.
Identification of functional modules in PPI networks is an important challenge in bioinformatics. A functional module is a set of proteins that are highly connected to each other in the PPI network. This is almost the same problem as community detection in social networks. There are methods such as Jactive modules and MoBaS: Jactive modules integrates the PPI network with gene expression data, whereas MoBaS integrates the PPI network with genome-wide association studies.
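A minimal sketch of module detection treated as community detection, assuming the networkx library is available; the toy edges below are hypothetical stand-ins for a PPI network loaded from a database.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [
    ("A", "B"), ("B", "C"), ("A", "C"),   # one densely connected group
    ("X", "Y"), ("Y", "Z"), ("X", "Z"),   # another densely connected group
    ("C", "X"),                           # a single link between the two groups
]
ppi_network = nx.Graph(edges)

# Each community is a candidate functional module: proteins more densely
# connected to each other than to the rest of the network.
for module in greedy_modularity_communities(ppi_network):
    print(sorted(module))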
Protein–protein relationships are often the result of multiple types of interactions or are deduced from different approaches, including co-localization, direct interaction, suppressive genetic interaction, additive genetic interaction, physical association, and other associations.
Signed interaction networks
Protein–protein interactions often result in one of the interacting proteins either being 'activated' or 'repressed'. Such effects can be indicated in a PPI network by "signs" (e.g. "activation" or "inhibition"). Although such attributes have been added to networks for a long time, Vinayagam et al. (2014) coined the term Signed network for them. Signed networks are often expressed by labeling the interaction as either positive or negative. A positive interaction is one where the interaction results in one of the proteins being activated. Conversely, a negative interaction is one where the interaction results in one of the proteins being inactivated.
Protein–protein interaction networks are often constructed as a result of lab experiments such as yeast two-hybrid screens or affinity purification and subsequent mass spectrometry techniques. However, these methods do not provide the layer of information needed to determine what type of interaction is present, and so cannot by themselves be used to attribute signs to the network diagrams.
RNA interference screens
RNA interference (RNAi) screens (repression of individual proteins between transcription and translation) are one method that can be utilized to provide signs for protein–protein interactions. Individual proteins are repressed and the resulting phenotypes are analyzed. A correlating phenotypic relationship (i.e. where the inhibition of either of two proteins results in the same phenotype) indicates a positive, or activating, relationship. Phenotypes that do not correlate (i.e. where the inhibition of either of two proteins results in two different phenotypes) indicate a negative, or inactivating, relationship. If protein A is dependent on protein B for activation, then the inhibition of either protein A or B will result in a cell losing the service that is provided by protein A, and the phenotypes will be the same for the inhibition of either A or B. If, however, protein A is inactivated by protein B, then the phenotypes will differ depending on which protein is inhibited: inhibiting protein B means it can no longer inactivate protein A, leaving A active, whereas inhibiting protein A leaves it inactive regardless of B, so the two phenotypes differ. Multiple RNAi screens need to be performed in order to reliably assign a sign to a given protein–protein interaction. Vinayagam et al., who devised this technique, state that a minimum of nine RNAi screens are required, with confidence increasing as more screens are carried out.
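The sign-assignment logic can be sketched in Python as a simple correlation test between phenotype profiles measured after knocking down each protein. The phenotype scores below are invented, and real studies combine many screens and statistical controls before assigning a sign.

from statistics import correlation  # available in Python 3.10+

phenotype_after_knockdown = {
    "proteinA": [0.90, 0.80, 0.85, 0.20, 0.10],
    "proteinB": [0.88, 0.82, 0.80, 0.25, 0.15],  # correlates with A -> activating
    "proteinC": [0.10, 0.20, 0.15, 0.90, 0.85],  # anti-correlates with A -> inhibiting
}

def interaction_sign(p, q, threshold=0.5):
    r = correlation(phenotype_after_knockdown[p], phenotype_after_knockdown[q])
    if r > threshold:
        return "positive (activation)"
    if r < -threshold:
        return "negative (inhibition)"
    return "unsigned"

print("A-B:", interaction_sign("proteinA", "proteinB"))
print("A-C:", interaction_sign("proteinA", "proteinC"))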
As therapeutic targets
Modulation of PPIs is challenging and is receiving increasing attention from the scientific community. Several properties of PPIs, such as allosteric sites and hotspots, have been incorporated into drug-design strategies. Nevertheless, very few PPIs are directly targeted by FDA-approved small-molecule PPI inhibitors, emphasizing a huge untapped opportunity for drug discovery.
In 2014, Amit Jaiswal and others used protein–protein interaction studies to develop 30 peptides that inhibit the recruitment of telomerase to telomeres. Arkin and others have developed antibody fragment-based inhibitors to regulate specific protein–protein interactions.
As the "modulation" of PPIs not only includes the inhibition, but also the stabilization of quaternary protein complexes, molecules with this mechanism of action (so called molecular glues) are also intensively studied.
Examples
Tirofiban, an inhibitor of glycoprotein IIb/IIIa, used as a cardiovascular drug
Maraviroc, an inhibitor of the CCR5–gp120 interaction, used as an anti-HIV drug
AMG-176, AZD5991, S64315, inhibitors of myeloid cell leukemia 1 (Mcl-1) protein and its interactions
See also
Glycan-protein interactions
3did
Allostery
Biological network
Biological machines
DIMA (database)
Enzyme catalysis
HitPredict
Human interactome
IsoBase
Multiprotein complex
Protein domain dynamics
Protein flexibility
Protein structure
Protein–protein interaction prediction
Protein–protein interaction screening
Systems biology
References
Further reading
External links
Protein–Protein Interaction Databases
Library of Modulators of Protein–Protein Interactions (PPI)
Proteomics
Signal transduction
Biophysics
Biochemistry methods
Biotechnology
Quantum biochemistry
Protein–protein interaction assays
Protein complexes | 0.785184 | 0.985271 | 0.773619 |
Chemical species
Chemical species are a specific form of chemical substance or chemically identical molecular entities that have the same molecular energy level at a specified timescale. These entities are classified through bonding types and relative abundance of isotopes. Types of chemical species can be classified based on the type of molecular entity and can be either an atomic, molecular, ionic or radical species.
Classification
Generally, a chemical species is defined as a chemical identity that has the same set of molecular energy levels in a defined timescale (i.e. an experiment). These energy levels determine the way the chemical species will interact with others through properties such as bonding or isotopic compositions. The chemical species can be an atom, molecule, ion, or radical, with a specific chemical name and chemical formula.
In supramolecular chemistry, chemical species are structures created by forming or breaking bonds between molecules, such as hydrogen bonding, dipole–dipole interactions, etc. These types of bonds can determine the physical properties of chemical species in the liquid or solid state.
The term is also applied to a set of chemically identical atomic or molecular structures in a solid compound.
Types of chemical species
Atomic species: Specific form of an element defined by the atom's isotope, electronic or oxidation state. Argon is an atomic species of formula Ar.
Molecular species: Groups of atoms that are held together by chemical bonds. An example is ozone, which has the chemical formula O3.
Ionic species: Atoms or molecules that have gained or lost electrons, resulting in a net electrical charge that can be either positively (cation) or negatively charged (anion).
Species with an overall positive charge will be a cationic species. The sodium ion is an example of a cationic species and its formula is Na+.
Species with an overall negative charge will be an anionic species. Chloride is an anionic species, and its formula is Cl−.
Radical species: Molecules or atoms with unpaired electrons. The triarylborane anion is a radical species, and its formula is Ar3B−.
A chemical can belong to more than one of these types. For example, nitrate is both a molecular and an ionic species, with the formula NO3−.
Note that DNA is not a species; the name is generically applied to many molecules of different formulas (each DNA molecule is unique).
See also
List of particles
References
Chemical substances
Schema (psychology)
In psychology and cognitive science, a schema (plural: schemata or schemas) describes a pattern of thought or behavior that organizes categories of information and the relationships among them. It can also be described as a mental structure of preconceived ideas, a framework representing some aspect of the world, or a system of organizing and perceiving new information, such as a mental schema or conceptual model. Schemata influence attention and the absorption of new knowledge: people are more likely to notice things that fit into their schema, while re-interpreting contradictions to the schema as exceptions or distorting them to fit. Schemata have a tendency to remain unchanged, even in the face of contradictory information. Schemata can help in understanding the world and the rapidly changing environment. People can organize new perceptions into schemata quickly as most situations do not require complex thought when using schema, since automatic thought is all that is required.
People use schemata to organize current knowledge and provide a framework for future understanding. Examples of schemata include mental models, social schemas, stereotypes, social roles, scripts, worldviews, heuristics, and archetypes. In Piaget's theory of development, children construct a series of schemata, based on the interactions they experience, to help them understand the world.
History
"Schema" comes from the Greek word schēmat or schēma, meaning "figure".
Prior to its use in psychology, the term "schema" had primarily seen use in philosophy. For instance, "schemata" (especially "transcendental schemata") are crucial to the architectonic system devised by Immanuel Kant in his Critique of Pure Reason.
Early developments of the idea in psychology emerged with the gestalt psychologists (founded originally by Max Wertheimer) and Jean Piaget. The term schéma was introduced by Piaget in 1923. In Piaget's later publications, action (operative or procedural) schèmes were distinguished from figurative (representational) schémas, although together they may be considered a schematic duality. In subsequent discussions of Piaget in English, schema was often a mistranslation of Piaget's original French schème. The distinction has been of particular importance in theories of embodied cognition and ecological psychology.
This concept was first described in the works of British psychologist Frederic Bartlett, who drew on the term body schema used by neurologist Henry Head in 1932. In 1952, Jean Piaget, who was credited with the first cognitive development theory of schemas, popularized this ideology. By 1977, it was expanded into schema theory by educational psychologist Richard C. Anderson. Since then, other terms have been used to describe schema such as "frame", "scene", and "script".
Schematic processing
Through the use of schemata, a heuristic technique to encode and retrieve memories, the majority of typical situations do not require much strenuous processing. People can quickly organize new perceptions into schemata and act without effort. The process, however, is not always accurate, and people may develop illusory correlations, which is the tendency to form inaccurate or unfounded associations between categories, especially when the information is distinctive.
Nevertheless, schemata can influence and hamper the uptake of new information, such as when existing stereotypes, giving rise to limited or biased discourses and expectations, lead an individual to "see" or "remember" something that has not happened because it is more believable in terms of his/her schema. For example, if a well-dressed businessman draws a knife on a vagrant, the schemata of onlookers may (and often do) lead them to "remember" the vagrant pulling the knife. Such distortion of memory has been demonstrated. (See below.) Furthermore, it has also been seen to affect the formation of episodic memory in humans. For instance, one is more likely to remember a pencil case in an office than a skull, even if both were present in the office, when tested on certain recall conditions.
Schemata are interrelated and multiple conflicting schemata can be applied to the same information. Schemata are generally thought to have a level of activation, which can spread among related schemata. Through different factors such as current activation, accessibility, priming, and emotion, a specific schema can be selected.
Accessibility is how easily a schema can come to mind, and is determined by personal experience and expertise. This can be used as a cognitive shortcut, meaning it allows the most common explanation to be chosen for new information.
With priming (an increased sensitivity to a particular schema due to a recent experience), a brief imperceptible stimulus temporarily provides enough activation to a schema so that it is used for subsequent ambiguous information. Although this may suggest the possibility of subliminal messages, the effect of priming is so fleeting that it is difficult to detect outside laboratory conditions.
Background research
Frederic Bartlett
The original concept of schemata is linked with that of reconstructive memory as proposed and demonstrated in a series of experiments by Frederic Bartlett. Bartlett presented participants with information that was unfamiliar to their cultural backgrounds and expectations, and then monitored how they recalled these different items of information (stories, etc.). Bartlett was able to establish that individuals' existing schemata and stereotypes influence not only how they interpret "schema-foreign" new information but also how they recall the information over time. One of his most famous investigations involved asking participants to read a Native American folk tale, "The War of the Ghosts", and recall it several times up to a year later. All the participants transformed the details of the story in such a way that it reflected their cultural norms and expectations, i.e. in line with their schemata. The factors that influenced their recall were:
Omission of information that was considered irrelevant to a participant;
Transformation of some of the details, or of the order in which events, etc., were recalled; a shift of focus and emphasis in terms of what was considered the most important aspects of the tale;
Rationalization: details and aspects of the tale that would not make sense would be "padded out" and explained in an attempt to render them comprehensible to the individual in question;
Cultural shifts: the content and the style of the story were altered in order to appear more coherent and appropriate in terms of the cultural background of the participant.
Bartlett's work was crucially important in demonstrating that long-term memories are neither fixed nor unchanging but are constantly being adjusted as schemata evolve with experience. His work contributed to a framework of memory retrieval in which people construct the past and present in a constant process of narrative/discursive adjustment. Much of what people "remember" is confabulated narrative (adjusted and rationalized) which allows them to think of the past as a continuous and coherent string of events, even though it is probable that large sections of memory (both episodic and semantic) are irretrievable or inaccurate at any given time.
An important step in the development of schema theory was taken by the work of D.E. Rumelhart describing the understanding of narrative and stories. Further work on the concept of schemata was conducted by W.F. Brewer and J.C. Treyens, who demonstrated that the schema-driven expectation of the presence of an object was sometimes sufficient to trigger its incorrect recollection. An experiment was conducted where participants were requested to wait in a room identified as an academic's study and were later asked about the room's contents. A number of the participants recalled having seen books in the study whereas none were present. Brewer and Treyens concluded that the participants' expectations that books are present in academics' studies were enough to prevent their accurate recollection of the scenes.
In the 1970s, computer scientist Marvin Minsky was trying to develop machines that would have human-like abilities. When he was trying to create solutions for some of the difficulties he encountered, he came across Bartlett's work and concluded that if he was ever going to get machines to act like humans he needed them to use their stored knowledge to carry out processes. His frame construct, a way to represent knowledge in machines, can be seen as an extension and elaboration of the schema construct. He created the frame knowledge concept as a way to interact with new information. He proposed that fixed and broad information would be represented as the frame, which would also be composed of slots that would accept a range of values; if the world did not have a value for a slot, then it would be filled by a default value. Because of Minsky's work, computers now have a stronger impact on psychology. In the 1980s, David Rumelhart extended Minsky's ideas, creating an explicitly psychological theory of the mental representation of complex knowledge.
Roger Schank and Robert Abelson developed the idea of a script, which was known as a generic knowledge of sequences of actions. This led to many new empirical studies, which found that providing relevant schema can help improve comprehension and recall on passages.
Schemata have also been viewed from a sociocultural perspective with contributions from Lev Vygotsky, in which there is a transactional relationship between the development of a schema and the environment that influences it, such that the schema does not develop independently as a construct in the mind, but carries all the aspects of the history, social, and cultural meaning which influences its development. Schemata are not just scripts or frameworks to be called upon, but are active processes for solving problems and interacting with the world. However, schemas can also contribute to influential outside sociocultural perspectives, like the development of racism tendencies, disregard for marginalized communities and cultural misconceptions.
Modification
New information that falls within an individual's schema is easily remembered and incorporated into their worldview. However, when new information is perceived that does not fit a schema, many things can happen. One of the most common reactions is for a person to simply ignore or quickly forget the new information they acquired. This can happen on an unconscious level—meaning, unintentionally an individual may not even perceive the new information. People may also interpret the new information in a way that minimizes how much they must change their schemata. For example, Bob thinks that chickens do not lay eggs. He then sees a chicken laying an egg. Instead of changing the part of his schema that says "chickens don't lay eggs", he is likely to adopt the belief that the animal in question that he has just seen laying an egg is not a real chicken. This is an example of disconfirmation bias, the tendency to set higher standards for evidence that contradicts one's expectations. This is also known as cognitive dissonance. However, when the new information cannot be ignored, existing schemata must be changed or new schemata must be created (accommodation).
Jean Piaget (1896–1980) was known best for his work with development of human knowledge. He believed knowledge was constructed on cognitive structures, and he believed people develop cognitive structures by accommodating and assimilating information. Accommodation is creating new schemata that will fit better with the new environment, or adjusting old schemata. Accommodation could also be interpreted as putting restrictions on a current schema, and usually comes about when assimilation has failed. Assimilation is when people use a current schema to understand the world around them. Piaget thought that schemata are applied to everyday life and therefore people accommodate and assimilate information naturally. For example, if this chicken has red feathers, Bob can form a new schema that says "chickens with red feathers can lay eggs". This schema, in the future, will either be changed or removed entirely.
Assimilation is the reuse of schemata to fit the new information. For example, when a person sees an unfamiliar dog, they will probably just integrate it into their dog schema. However, if the dog behaves strangely, in ways that do not seem dog-like, there will be an accommodation as a new schema is formed for that particular dog. With accommodation and assimilation comes the idea of equilibrium. Piaget describes equilibrium as a state of cognition that is balanced when schemata are capable of explaining what a person sees and perceives. When information is new and cannot fit into an existing schema, disequilibrium can happen. When disequilibrium happens, the person is frustrated and will try to restore the coherence of his or her cognitive structures through accommodation. If the new information is accepted, assimilation of it proceeds until a further adjustment is needed later on; for now, the person is at equilibrium again. The process of equilibration is when people move from the equilibrium phase to the disequilibrium phase and back into equilibrium.
In view of this, a person's new schemata may be an expansion of the schemata into a subtype. This allows for the information to be incorporated into existing beliefs without contradicting them. An example in social psychology would be the combination of a person's beliefs about women and their beliefs about business. If women are not generally perceived to be in business, but the person meets a woman who is, a new subtype of businesswoman may be created, and the information perceived will be incorporated into this subtype. Activation of either woman or business schema may then make further available the schema of "businesswoman". This also allows for previous beliefs about women or those in business to persist. Rather than modifying the schemata related to women or to business persons, the subtype is its own category.
Self-schema
Schemata about oneself are considered to be grounded in the present and based on past experiences. Memories are framed in the light of one's self-conception. For example, people who have positive self-schemata (i.e. most people) selectively attend to flattering information and ignore unflattering information, with the consequence that flattering information is subject to deeper encoding, and therefore superior recall. Even when encoding is equally strong for positive and negative feedback, positive feedback is more likely to be recalled. Moreover, memories may even be distorted to become more favorable: for example, people typically remember exam grades as having been better than they actually were. However, when people have negative self views, memories are generally biased in ways that validate the negative self-schema; people with low self-esteem, for instance, are prone to remember more negative information about themselves than positive information. Thus, memory tends to be biased in a way that validates the agent's pre-existing self-schema.
There are three major implications of self-schemata. First, information about oneself is processed faster and more efficiently, especially consistent information. Second, one retrieves and remembers information that is relevant to one's self-schema. Third, one will tend to resist information in the environment that is contradictory to one's self-schema. For instance, students with a particular self-schema prefer roommates whose view of them is consistent with that schema. Students who end up with roommates whose view of them is inconsistent with their self-schema are more likely to try to find a new roommate, even if this view is positive. This is an example of self-verification.
As researched by Aaron Beck, automatically activated negative self-schemata are a large contributor to depression. According to Cox, Abramson, Devine, and Hollon (2012), these self-schemata are essentially the same type of cognitive structure as stereotypes studied by prejudice researchers (e.g., they are both well-rehearsed, automatically activated, difficult to change, influential toward behavior, emotions, and judgments, and bias information processing).
The self-schema can also be self-perpetuating. It can represent a particular role in society that is based on stereotype, for example: "If a mother tells her daughter she looks like a tom boy, her daughter may react by choosing activities that she imagines a tom boy would do. Conversely, if the mother tells her she looks like a princess, her daughter might choose activities thought to be more feminine." This is an example of the self-schema becoming self-perpetuating when the person at hand chooses an activity that was based on an expectation rather than their desires.
Schema therapy
Schema therapy was founded by Jeffrey Young and represents a development of cognitive behavioral therapy (CBT) specifically for treating personality disorders. Early maladaptive schemata are described by Young as broad and pervasive themes or patterns made up of memories, feelings, sensations, and thoughts regarding oneself and one's relationships with others; they can be a contributing factor to treatment outcomes of mental disorders and the maintenance of ideas, beliefs, and behaviors towards oneself and others. They are considered to develop during childhood or adolescence, and to be dysfunctional in that they lead to self-defeating behavior. Examples include schemata of abandonment/instability, mistrust/abuse, emotional deprivation, and defectiveness/shame.
Schema therapy blends CBT with elements of Gestalt therapy, object relations, constructivist and psychoanalytic therapies in order to treat the characterological difficulties which both constitute personality disorders and which underlie many of the chronic depressive or anxiety-involving symptoms which present in the clinic. Young said that CBT may be an effective treatment for presenting symptoms, but without the conceptual or clinical resources for tackling the underlying structures (maladaptive schemata) which consistently organize the patient's experience, the patient is likely to lapse back into unhelpful modes of relating to others and attempting to meet their needs. Young focused on pulling from different therapies equally when developing schema therapy. Cognitive behavioral methods work to increase the availability and strength of adaptive schemata while reducing the maladaptive ones. This may involve identifying the existing schema and then identifying an alternative to replace it. Difficulties arise as these types of schema often exist in absolutes; modification then requires replacement to be in absolutes, otherwise the initial belief may persist. The difference between cognitive behavioral therapy and schema therapy according to Young is the latter "emphasizes lifelong patterns, affective change techniques, and the therapeutic relationship, with special emphasis on limited reparenting". He recommended this therapy would be ideal for clients with difficult and chronic psychological disorders. Some examples would be eating disorders and personality disorders. He has also had success with this therapy in relation to depression and substance abuse.
See also
Cultural schema theory
Memetics
Personal construct theory
Primal world beliefs
Relational frame theory
Social cognition
Speed reading
References
External links
Huitt, W. (2018). Understanding reality: The importance of mental representations. In W. Huitt (Ed.), Becoming a Brilliant Star: Twelve core ideas supporting holistic education (pp. 65-81). IngramSpark.
Cognitive psychology
Cognitive science
Psychological adjustment
Psychological theories
Nomothetic
Nomothetic literally means "proposition of the law" (Greek derivation) and is used in philosophy, psychology, and law with differing meanings.
Etymology
In the general humanities usage, nomothetic may be used in the sense of "able to lay down the law", "having the capacity to posit lasting sense" (from nomothetēs νομοθέτης "lawgiver", from νόμος "law" and the Proto-Indo-European etymon nem-, meaning to "take, give, account, apportion"), e.g., 'the nomothetic capability of the early mythmakers' or 'the nomothetic skill of Adam, given the power to name things.'
In psychology
In psychology, nomothetic refers to research about general principles or generalizations across a population of individuals. For example, the Big Five model of personality and Piaget's developmental stages are nomothetic models of personality traits and cognitive development respectively. In contrast, idiographic refers to research about the unique and contingent aspects of individuals, as in psychological case studies.
In psychological testing, nomothetic measures are contrasted to ipsative or idiothetic measures, where nomothetic measures are measures that are observed on a relatively large sample and have a more general outlook.
In other fields
In sociology, nomothetic explanation presents a generalized understanding of a given case, and is contrasted with idiographic explanation, which presents a full description of a given case. Nomothetic approaches are most appropriate to the deductive approach to social research inasmuch as they include the more highly structured research methodologies which can be replicated and controlled, and which focus on generating quantitative data with a view to explaining causal relationships.
In anthropology, nomothetic refers to the use of generalization rather than specific properties in the context of a group as an entity.
In history, nomothetic refers to the philosophical shift in emphasis away from traditional presentation of historical text restricted to wars, laws, dates, and such, to a broader appreciation and deeper understanding.
See also
Nomothetic and idiographic
Nomological
References
Sociological terminology
Life
Life is a quality that distinguishes matter that has biological processes, such as signaling and self-sustaining processes, from matter that does not. It is defined descriptively by the capacity for homeostasis, organisation, metabolism, growth, adaptation, response to stimuli, and reproduction. All life over time eventually reaches a state of death, and none is immortal. Many philosophical definitions of living systems have been proposed, such as self-organizing systems. Viruses in particular make definition difficult as they replicate only in host cells. Life exists all over the Earth in air, water, and soil, with many ecosystems forming the biosphere. Some of these are harsh environments occupied only by extremophiles.
Life has been studied since ancient times, with theories such as Empedocles's materialism asserting that it was composed of four eternal elements, and Aristotle's hylomorphism asserting that living things have souls and embody both form and matter. Life originated at least 3.5 billion years ago, resulting in a universal common ancestor. This evolved into all the species that exist now, by way of many extinct species, some of which have left traces as fossils. Attempts to classify living things, too, began with Aristotle. Modern classification began with Carl Linnaeus's system of binomial nomenclature in the 1740s.
Living things are composed of biochemical molecules, formed mainly from a few core chemical elements. All living things contain two types of large molecule, proteins and nucleic acids, the latter usually both DNA and RNA: these carry the information needed by each species, including the instructions to make each type of protein. The proteins, in turn, serve as the machinery which carries out the many chemical processes of life. The cell is the structural and functional unit of life. Smaller organisms, including prokaryotes (bacteria and archaea), consist of small single cells. Larger organisms, mainly eukaryotes, can consist of single cells or may be multicellular with more complex structure. Life is only known to exist on Earth but extraterrestrial life is thought probable. Artificial life is being simulated and explored by scientists and engineers.
Definitions
Challenge
The definition of life has long been a challenge for scientists and philosophers. This is partially because life is a process, not a substance. This is complicated by a lack of knowledge of the characteristics of living entities, if any, that may have developed outside Earth. Philosophical definitions of life have also been put forward, with similar difficulties on how to distinguish living things from the non-living. Legal definitions of life have been debated, though these generally focus on the decision to declare a human dead, and the legal ramifications of this decision. At least 123 definitions of life have been compiled.
Descriptive
Since there is no consensus for a definition of life, most current definitions in biology are descriptive. Life is considered a characteristic of something that preserves, furthers or reinforces its existence in the given environment. This implies all or most of the following traits:
Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature.
Organisation: being structurally composed of one or more cells – the basic units of life.
Metabolism: transformation of energy, used to convert chemicals into cellular components (anabolism) and to decompose organic matter (catabolism). Living things require energy for homeostasis and other activities.
Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size and structure.
Adaptation: the evolutionary process whereby an organism becomes better able to live in its habitat.
Response to stimuli: such as the contraction of a unicellular organism away from external chemicals, the complex reactions involving all the senses of multicellular organisms, or the motion of the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms.
Physics
From a physics perspective, an organism is a thermodynamic system with an organised molecular structure that can reproduce itself and evolve as survival dictates. Thermodynamically, life has been described as an open system which makes use of gradients in its surroundings to create imperfect copies of itself. Another way of putting this is to define life as "a self-sustained chemical system capable of undergoing Darwinian evolution", a definition adopted by a NASA committee attempting to define life for the purposes of exobiology, based on a suggestion by Carl Sagan. This definition, however, has been widely criticised because according to it, a single sexually reproducing individual is not alive as it is incapable of evolving on its own.
Living systems
Others take a living systems theory viewpoint that does not necessarily depend on molecular chemistry. One systemic definition of life is that living things are self-organizing and autopoietic (self-producing). Variations of this include Stuart Kauffman's definition as an autonomous agent or a multi-agent system capable of reproducing itself, and of completing at least one thermodynamic work cycle. This definition is extended by the evolution of novel functions over time.
Death
Death is the termination of all vital functions or life processes in an organism or cell.
One of the challenges in defining death is in distinguishing it from life. Death would seem to refer to either the moment life ends, or when the state that follows life begins. However, determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing conceptual lines between life and death. This is problematic because there is little consensus over how to define life. The nature of death has for millennia been a central concern of the world's religious traditions and of philosophical inquiry. Many religions maintain faith in either a kind of afterlife or reincarnation for the soul, or resurrection of the body at a later date.
Viruses
Whether or not viruses should be considered as alive is controversial. They are most often considered as just gene coding replicators rather than forms of life. They have been described as "organisms at the edge of life" because they possess genes, evolve by natural selection, and replicate by making multiple copies of themselves through self-assembly. However, viruses do not metabolise and they require a host cell to make new products. Virus self-assembly within host cells has implications for the study of the origin of life, as it may support the hypothesis that life could have started as self-assembling organic molecules.
History of study
Materialism
Some of the earliest theories of life were materialist, holding that all that exists is matter, and that life is merely a complex form or arrangement of matter. Empedocles (430 BC) argued that everything in the universe is made up of a combination of four eternal "elements" or "roots of all": earth, water, air, and fire. All change is explained by the arrangement and rearrangement of these four elements. The various forms of life are caused by an appropriate mixture of elements.
Democritus (460 BC) was an atomist; he thought that the essential characteristic of life was having a soul (psyche), and that the soul, like everything else, was composed of fiery atoms. He elaborated on fire because of the apparent connection between life and heat, and because fire moves.
Plato, in contrast, held that the world was organised by permanent forms, reflected imperfectly in matter; forms provided direction or intelligence, explaining the regularities observed in the world. The mechanistic materialism that originated in ancient Greece was revived and revised by the French philosopher René Descartes (1596–1650), who held that animals and humans were assemblages of parts that together functioned as a machine. This idea was developed further by Julien Offray de La Mettrie (1709–1750) in his book L'Homme Machine. In the 19th century the advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection. At the beginning of the 20th century Stéphane Leduc (1853–1939) promoted the idea that biological processes could be understood in terms of physics and chemistry, and that their growth resembled that of inorganic crystals immersed in solutions of sodium silicate. His ideas, set out in his book La biologie synthétique, were widely dismissed during his lifetime, but have seen a resurgence of interest in the work of Russell, Barge and colleagues.
Hylomorphism
Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima). There are three kinds of souls: the vegetative soul of plants, which causes them to grow and decay and nourish themselves, but does not cause motion and sensation; the animal soul, which causes animals to move and feel; and the rational soul, which is the source of consciousness and reasoning, which (Aristotle believed) is found only in man. Each higher soul has all of the attributes of the lower ones. Aristotle believed that while matter can exist without form, form cannot exist without matter, and that therefore the soul cannot exist without the body.
This account is consistent with teleological explanations of life, which account for phenomena in terms of purpose or goal-directedness. Thus, the whiteness of the polar bear's coat is explained by its purpose of camouflage. The direction of causality (from the future to the past) is in contradiction with the scientific evidence for natural selection, which explains the consequence in terms of a prior cause. Biological features are explained not by looking at future optimal results, but by looking at the past evolutionary history of a species, which led to the natural selection of the features in question.
Spontaneous generation
Spontaneous generation was the belief that living organisms can form without descent from similar organisms. Typically, the idea was that certain forms such as fleas could arise from inanimate matter such as dust or the supposed seasonal generation of mice and insects from mud or garbage.
The theory of spontaneous generation was proposed by Aristotle, who compiled and expanded the work of prior natural philosophers and the various ancient explanations of the appearance of organisms; it was considered the best explanation for two millennia. It was decisively dispelled by the experiments of Louis Pasteur in 1859, who expanded upon the investigations of predecessors such as Francesco Redi. Disproof of the traditional ideas of spontaneous generation is no longer controversial among biologists.
Vitalism
Vitalism is the belief that there is a non-material life-principle. This originated with Georg Ernst Stahl (17th century), and remained popular until the middle of the 19th century. It appealed to philosophers such as Henri Bergson, Friedrich Nietzsche, and Wilhelm Dilthey, anatomists like Xavier Bichat, and chemists like Justus von Liebig. Vitalism included the idea that there was a fundamental difference between organic and inorganic material, and the belief that organic material can only be derived from living things. This was disproved in 1828, when Friedrich Wöhler prepared urea from inorganic materials. This Wöhler synthesis is considered the starting point of modern organic chemistry. It is of historical significance because for the first time an organic compound was produced in inorganic reactions.
During the 1850s Hermann von Helmholtz, anticipated by Julius Robert von Mayer, demonstrated that no energy is lost in muscle movement, suggesting that there were no "vital forces" necessary to move a muscle. These results led to the abandonment of scientific interest in vitalistic theories, especially after Eduard Buchner's demonstration that alcoholic fermentation could occur in cell-free extracts of yeast. Nonetheless, belief still exists in pseudoscientific theories such as homoeopathy, which interprets diseases and sickness as caused by disturbances in a hypothetical vital force or life force.
Development
Origin of life
The age of Earth is about 4.54 billion years. Life on Earth has existed for at least 3.5 billion years, with the oldest physical traces of life dating back 3.7 billion years. Estimates from molecular clocks, as summarised in the TimeTree public database, place the origin of life around 4.0 billion years ago. Hypotheses on the origin of life attempt to explain the formation of a universal common ancestor from simple organic molecules via pre-cellular life to protocells and metabolism. In 2016, a set of 355 genes from the last universal common ancestor was tentatively identified.
The biosphere is postulated to have developed, from the origin of life onwards, at least some 3.5 billion years ago. The earliest evidence for life on Earth includes biogenic graphite found in 3.7 billion-year-old metasedimentary rocks from Western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone from Western Australia. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. In 2017, putative fossilised microorganisms (or microfossils) were announced to have been discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada that were as old as 4.28 billion years, the oldest record of life on Earth, suggesting "an almost instantaneous emergence of life" after ocean formation 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Evolution
Evolution is the change in heritable characteristics of biological populations over successive generations. It results in the appearance of new species and often the disappearance of old ones. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on genetic variation, resulting in certain characteristics increasing or decreasing in frequency within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
Fossils
Fossils are the preserved remains or traces of organisms from the remote past. The totality of fossils, both discovered and undiscovered, and their placement in layers (strata) of sedimentary rock is known as the fossil record. A preserved specimen is called a fossil if it is older than the arbitrary date of 10,000 years ago. Hence, fossils range in age from the youngest at the start of the Holocene Epoch to the oldest from the Archaean Eon, up to 3.4 billion years old.
Extinction
Extinction is the process by which a species dies out. The moment of extinction is the death of the last individual of that species. Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively after a period of apparent absence. Species become extinct when they are no longer able to survive in changing habitat or against superior competition. Over 99% of all the species that have ever lived are now extinct. Mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.
Environmental conditions
The diversity of life on Earth is a result of the dynamic interplay between genetic opportunity, metabolic capability, environmental challenges, and symbiosis. For most of its existence, Earth's habitable environment has been dominated by microorganisms and subjected to their metabolism and evolution. As a consequence of these microbial activities, the physical-chemical environment on Earth has been changing on a geologic time scale, thereby affecting the path of evolution of subsequent life. For example, the release of molecular oxygen by cyanobacteria as a by-product of photosynthesis induced global changes in the Earth's environment. Because oxygen was toxic to most life on Earth at the time, this posed novel evolutionary challenges, and ultimately resulted in the formation of Earth's major animal and plant species. This interplay between organisms and their environment is an inherent feature of living systems.
Biosphere
The biosphere is the global sum of all ecosystems. It can also be termed the zone of life on Earth, a closed system (apart from solar and cosmic radiation and heat from the interior of the Earth), and largely self-regulating. Organisms exist in every part of the biosphere, including soil, hot springs, rocks deep underground, the deepest parts of the ocean, and high in the atmosphere. For example, spores of Aspergillus niger have been detected in the mesosphere at an altitude of 48 to 77 km. Under test conditions, life forms have been observed to survive in the vacuum of space. Life forms thrive in the deep Mariana Trench, inside rocks far below the sea floor under deep ocean off the coast of the northwestern United States, and beneath the seabed off Japan. In 2014, life forms were found living below the ice of Antarctica. Expeditions of the International Ocean Discovery Program found unicellular life in 120 °C sediment 1.2 km below the seafloor in the Nankai Trough subduction zone. According to one researcher, "You can find microbes everywhere—they're extremely adaptable to conditions, and survive wherever they are."
Range of tolerance
The inert components of an ecosystem are the physical and chemical factors necessary for life—energy (sunlight or chemical energy), water, heat, atmosphere, gravity, nutrients, and ultraviolet solar radiation protection. In most ecosystems, the conditions vary during the day and from one season to the next. To live in most ecosystems, then, organisms must be able to survive a range of conditions, called the "range of tolerance". Outside that are the "zones of physiological stress", where the survival and reproduction are possible but not optimal. Beyond these zones are the "zones of intolerance", where survival and reproduction of that organism is unlikely or impossible. Organisms that have a wide range of tolerance are more widely distributed than organisms with a narrow range of tolerance.
Extremophiles
To survive, some microorganisms have evolved to withstand freezing, complete desiccation, starvation, high levels of radiation exposure, and other physical or chemical challenges. These extremophile microorganisms may survive exposure to such conditions for long periods. They excel at exploiting uncommon sources of energy. Characterization of the structure and metabolic diversity of microbial communities in such extreme environments is ongoing.
Classification
Antiquity
The first classification of organisms was made by the Greek philosopher Aristotle (384–322 BC), who grouped living things as either plants or animals, based mainly on their ability to move. He distinguished animals with blood from animals without blood, which can be compared with the concepts of vertebrates and invertebrates respectively, and divided the blooded animals into five groups: viviparous quadrupeds (mammals), oviparous quadrupeds (reptiles and amphibians), birds, fishes and whales. The bloodless animals were divided into five groups: cephalopods, crustaceans, insects (which included the spiders, scorpions, and centipedes), shelled animals (such as most molluscs and echinoderms), and "zoophytes" (animals that resemble plants). This theory remained dominant for more than a thousand years.
Linnaean
In the late 1740s, Carl Linnaeus introduced his system of binomial nomenclature for the classification of species. Linnaeus attempted to improve the composition and reduce the length of the previously used many-worded names by abolishing unnecessary rhetoric, introducing new descriptive terms and precisely defining their meaning.
The fungi were originally treated as plants. For a short period Linnaeus had classified them in the taxon Vermes in Animalia, but later placed them back in Plantae. Herbert Copeland classified the Fungi in his Protoctista, including them with single-celled organisms and thus partially avoiding the problem but acknowledging their special status. The problem was eventually solved by Whittaker, when he gave them their own kingdom in his five-kingdom system. Evolutionary history shows that the fungi are more closely related to animals than to plants.
As advances in microscopy enabled detailed study of cells and microorganisms, new groups of life were revealed, and the fields of cell biology and microbiology were created. These new organisms were originally described separately in protozoa as animals and protophyta/thallophyta as plants, but were united by Ernst Haeckel in the kingdom Protista; later, the prokaryotes were split off in the kingdom Monera, which would eventually be divided into two separate groups, the Bacteria and the Archaea. This led to the six-kingdom system and eventually to the current three-domain system, which is based on evolutionary relationships. However, the classification of eukaryotes, especially of protists, is still controversial.
As microbiology developed, viruses, which are non-cellular, were discovered. Whether these are considered alive has been a matter of debate; viruses lack characteristics of life such as cell membranes, metabolism and the ability to grow or respond to their environments. Viruses have been classed into "species" based on their genetics, but many aspects of such a classification remain controversial.
The original Linnaean system has been modified many times.
The attempt to organise the Eukaryotes into a small number of kingdoms has been challenged. The Protozoa do not form a clade or natural grouping, and nor do the Chromista (Chromalveolata).
Metagenomic
The ability to sequence large numbers of complete genomes has allowed biologists to take a metagenomic view of the phylogeny of the whole tree of life. This has led to the realisation that the majority of living things are bacteria, and that all have a common origin.
Composition
Chemical elements
All life forms require certain core chemical elements for their biochemical functioning. These include carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur—the elemental macronutrients for all organisms. Together these make up nucleic acids, proteins and lipids, the bulk of living matter. Five of these six elements comprise the chemical components of DNA, the exception being sulfur. The latter is a component of the amino acids cysteine and methionine. The most abundant of these elements in organisms is carbon, which has the desirable attribute of forming multiple, stable covalent bonds. This allows carbon-based (organic) molecules to form the immense variety of chemical arrangements described in organic chemistry.
Alternative hypothetical types of biochemistry have been proposed that eliminate one or more of these elements, swap out an element for one not on the list, or change required chiralities or other chemical properties.
DNA
Deoxyribonucleic acid or DNA is a molecule that carries most of the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins and complex carbohydrates, they are one of the three major types of macromolecule that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix. The two DNA strands are known as polynucleotides since they are composed of simpler units called nucleotides. Each nucleotide is composed of a nitrogen-containing nucleobase—either cytosine (C), guanine (G), adenine (A), or thymine (T)—as well as a sugar called deoxyribose and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. According to base pairing rules (A with T, and C with G), hydrogen bonds bind the nitrogenous bases of the two separate polynucleotide strands to make double-stranded DNA. This has the key property that each strand contains all the information needed to recreate the other strand, enabling the information to be preserved during reproduction and cell division. Within cells, DNA is organised into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotes store most of their DNA inside the cell nucleus.
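As an illustration of these base-pairing rules, the short Python sketch below derives the partner strand of a given sequence (the function name and the example sequence are invented for illustration; real genomic tooling would also handle ambiguity codes and lowercase bases):

    # Base-pairing rules: A pairs with T, and C pairs with G.
    PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def partner_strand(strand: str) -> str:
        """Return the antiparallel partner of a 5'->3' sequence, read 5'->3'."""
        return "".join(PAIRS[base] for base in reversed(strand))

    print(partner_strand("ATGCCGTA"))  # prints TACGGCAT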
Cells
Cells are the basic unit of structure in every living thing, and all cells arise from pre-existing cells by division. Cell theory was formulated by Henri Dutrochet, Theodor Schwann, Rudolf Virchow and others during the early nineteenth century, and subsequently became widely accepted. The activity of an organism depends on the total activity of its cells, with energy flow occurring within and between them. Cells contain hereditary information that is carried forward as a genetic code during cell division.
There are two primary types of cells, reflecting their evolutionary origins. Prokaryote cells lack a nucleus and other membrane-bound organelles, although they have circular DNA and ribosomes. Bacteria and Archaea are two domains of prokaryotes. The other primary type is the eukaryote cell, which has a distinct nucleus bound by a nuclear membrane and membrane-bound organelles, including mitochondria, chloroplasts, lysosomes, rough and smooth endoplasmic reticulum, and vacuoles. In addition, their DNA is organised into chromosomes. All species of large complex organisms are eukaryotes, including animals, plants and fungi; eukaryotes also include a wide diversity of protist microorganisms. The conventional model is that eukaryotes evolved from prokaryotes, with the main organelles of the eukaryotes forming through endosymbiosis between bacteria and the progenitor eukaryotic cell.
The molecular mechanisms of cell biology are based on proteins. Most of these are synthesised by the ribosomes through an enzyme-catalyzed process called protein biosynthesis. A sequence of amino acids is assembled and joined based upon gene expression of the cell's nucleic acid. In eukaryotic cells, these proteins may then be transported and processed through the Golgi apparatus in preparation for dispatch to their destination.
Cells reproduce through a process of cell division in which the parent cell divides into two or more daughter cells. For prokaryotes, cell division occurs through a process of fission in which the DNA is replicated, then the two copies are attached to parts of the cell membrane. In eukaryotes, a more complex process of mitosis is followed. However, the result is the same; the resulting cell copies are identical to each other and to the original cell (except for mutations), and both are capable of further division following an interphase period.
Multicellular structure
Multicellular organisms may have first evolved through the formation of colonies of identical cells. These cells can form group organisms through cell adhesion. The individual members of a colony are capable of surviving on their own, whereas the members of a true multi-cellular organism have developed specialisations, making them dependent on the remainder of the organism for survival. Such organisms are formed clonally or from a single germ cell that is capable of forming the various specialised cells that form the adult organism. This specialisation allows multicellular organisms to exploit resources more efficiently than single cells. About 800 million years ago, a minor genetic change in a single molecule, the enzyme GK-PID, may have allowed organisms to go from a single cell organism to one of many cells.
Cells have evolved methods to perceive and respond to their microenvironment, thereby enhancing their adaptability. Cell signalling coordinates cellular activities, and hence governs the basic functions of multicellular organisms. Signaling between cells can occur through direct cell contact using juxtacrine signalling, or indirectly through the exchange of agents as in the endocrine system. In more complex organisms, coordination of activities can occur through a dedicated nervous system.
In the universe
Though life is confirmed only on Earth, many think that extraterrestrial life is not only plausible, but probable or inevitable, possibly resulting in a biophysical cosmology instead of a mere physical cosmology. Other planets and moons in the Solar System and other planetary systems are being examined for evidence of having once supported simple life, and projects such as SETI are trying to detect radio transmissions from possible alien civilisations. Other locations within the Solar System that may host microbial life include the subsurface of Mars, the upper atmosphere of Venus, and subsurface oceans on some of the moons of the giant planets.
Investigation of the tenacity and versatility of life on Earth, as well as an understanding of the molecular systems that some organisms utilise to survive such extremes, is important for the search for extraterrestrial life. For example, lichen could survive for a month in a simulated Martian environment.
Beyond the Solar System, the region around another main-sequence star that could support Earth-like life on an Earth-like planet is known as the habitable zone. The inner and outer radii of this zone vary with the luminosity of the star, as does the time interval during which the zone survives. Stars more massive than the Sun have a larger habitable zone, but remain on the Sun-like "main sequence" of stellar evolution for a shorter time interval. Small red dwarfs have the opposite problem, with a smaller habitable zone that is subject to higher levels of magnetic activity and the effects of tidal locking from close orbits. Hence, stars in the intermediate mass range such as the Sun may have a greater likelihood for Earth-like life to develop. The location of the star within a galaxy may also affect the likelihood of life forming. Stars in regions with a greater abundance of heavier elements that can form planets, in combination with a low rate of potentially habitat-damaging supernova events, are predicted to have a higher probability of hosting planets with complex life. The variables of the Drake equation are used to discuss the conditions in planetary systems where civilisation is most likely to exist, within wide bounds of uncertainty. A "Confidence of Life Detection" scale (CoLD) for reporting evidence of life beyond Earth has been proposed.
Artificial
Artificial life is the simulation of any aspect of life, as through computers, robotics, or biochemistry. Synthetic biology is a new area of biotechnology that combines science and biological engineering. The common goal is the design and construction of new biological functions and systems not found in nature. Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goals of being able to design and build engineered biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health and the environment.
See also
Biology, the study of life
Biosignature
Carbon-based life
Central dogma of molecular biology
History of life
Lists of organisms by population
Viable system theory
Notes
References
External links
Vitae (BioLib)
Wikispecies – a free directory of life
Biota (Taxonomicon) (archived 15 July 2014)
Entry on the Stanford Encyclopedia of Philosophy
What Is Life? – by Jaime Green, The Atlantic (archived 5 December 2023)
Main topic articles
Abstract structure
An abstract structure is an abstraction, such as a geometric space or a set structure, or a hypostatic abstraction defined by a set of mathematical theorems and laws, properties and relationships, in a way that is logically (if not always historically) independent of the structure of contingent experiences, for example those involving physical objects. Abstract structures are studied not only in logic and mathematics but in the fields that apply them, such as computer science and computer graphics, and in the studies that reflect on them, such as philosophy (especially the philosophy of mathematics). Indeed, modern mathematics has been defined in a very general sense as the study of abstract structures (by the Bourbaki group: see discussion there, at algebraic structure and also structure).
An abstract structure may be represented (perhaps with some degree of approximation) by one or more physical objects; this is called an implementation or instantiation of the abstract structure. But the abstract structure itself is defined in a way that is not dependent on the properties of any particular implementation.
An abstract structure has a richer structure than a concept or an idea. An abstract structure must include precise rules of behaviour which can be used to determine whether a candidate implementation actually matches the abstract structure in question, and it must be free from contradictions. Thus we may debate how well a particular government fits the concept of democracy, but there is no room for debate over whether a given sequence of moves is or is not a valid game of chess.
Examples
A sorting algorithm is an abstract structure, but a recipe is not, because it depends on the properties and quantities of its ingredients.
A simple melody is an abstract structure, but an orchestration is not, because it depends on the properties of particular instruments.
Euclidean geometry is an abstract structure, but the theory of continental drift is not, because it depends on the geology of the Earth.
A formal language is an abstract structure, but a natural language is not, because its rules of grammar and syntax are open to debate and interpretation.
Notes
See also
Abstraction in computer science
Abstraction in general
Abstraction in mathematics
Abstract object
Deductive apparatus
Formal sciences
Mathematical structure
Abstraction
Mathematical terminology
Structure
Reaxys
Reaxys is a web-based tool for the retrieval of information about chemical compounds and data from published literature, including journals and patents. The information includes chemical compounds, chemical reactions, chemical properties, related bibliographic data, substance data with synthesis planning information, as well as experimental procedures from selected journals and patents. It is licensed by Elsevier.
Reaxys was launched in 2009 as the successor to the CrossFire databases. It was developed to provide research chemists with access to current and historical, relevant, organic, inorganic and organometallic chemistry information, from reliable sources via an easy-to-use interface.
Scope and access
One of the primary goals of Reaxys is to provide research chemists with access to experimentally measured data – reactions, physical, chemical or pharmacological – in one universal and factual platform. Content covers organic, medicinal, synthetic, agro, fine, catalyst, inorganic and process chemistry and provides information on structures, reactions, and citations. Additional features include a synthesis planner and access to commercial availability information. There have been regular releases and enhancements to Reaxys since it was first launched, including similarity searching.
Reaxys provides links to Scopus for all matching articles and interoperability with ScienceDirect. Access to the database is subject to an annual license agreement.
Core data
The content covers more than 200 years of chemistry and has been abstracted from several thousands of journal titles, books and patents. Today the data is drawn from selected journals (400 titles) and chemistry patents, and the excerption process for each reaction or substance data included needs to meet three conditions:
It has a chemical structure
It is supported by an experimental fact (property, preparation, reaction)
It has a credible citation
Journals covered include Advanced Synthesis and Catalysis, Journal of the American Chemical Society, Journal of Organometallic Chemistry, Synlett and Tetrahedron.
Patents in Reaxys come from the International Patent Classes:
C07 Organic Chemistry
A61K and secondary IPC C07 [Medicinal, Dental, Cosmetic Preparations]
A01N
C09B Dyes
Comparison with other chemical databases
Only a very limited number of studies have compared Reaxys with other databases that provide chemical search functionality, such as SciFinder, ChEMBL, PubChem and Questel-Orbit. For example, the most comprehensive study, published in 2020 by researchers from the University of Sydney, concluded that "Reaxys is definitely the first choice, due to both its wealth of data and its precise search facilities...but for less common data and spectra SciFinder contains often more information than Reaxys. PubChem should also be included, not only because of its size and accessibility... Reaxys has well over 100 times the number of experimental property data points [as SciFinder]... In the case of Reaxys and SciFinder, the natural language query algorithms in Reaxys are displayable, but in SciFinder the algorithms are proprietary and not available."
See also
Beilstein database
References
External links
Chemical databases
Bibliographic databases and indexes
Numerical analysis
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid 20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms.
The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289), gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square.
Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after translation into digits, it provides approximate solutions within specified error bounds.
Key aspects of numerical analysis include:
1. Error Analysis: Understanding and minimizing the errors that arise in numerical calculations, such as round-off errors, truncation errors, and approximation errors.
2. Convergence: Determining whether a numerical method will converge to the correct solution as more iterations or finer steps are taken.
3. Stability: Ensuring that small changes in the input or intermediate steps do not cause large changes in the output, which could lead to incorrect results.
4. Efficiency: Developing algorithms that solve problems in a reasonable amount of time and with manageable computational resources.
5. Conditioning: Analyzing how the solution to a problem is affected by small changes in the input data, which helps in assessing the reliability of the numerical solution.
Numerical analysis plays a crucial role in scientific computing, engineering simulations, financial modeling, and many other fields where mathematical modeling is essential.
Applications
The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically:
Advanced numerical methods are essential in making numerical weather prediction feasible.
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.
Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.
In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.
Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research.
Insurance companies use numerical programs for actuarial analysis.
History
The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E. T. Whittaker in 1912.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.
The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications.
Key concepts
Direct and iterative methods
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
As an example, consider the problem of solving
3x³ + 4 = 28
for the unknown quantity x.
For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57.
Successive bisection steps halve the bracketing interval; after a few iterations it can be concluded that the solution lies between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2.
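The bisection steps for this example can be reproduced with a short sketch (a minimal Python illustration of the method described above; the number of iterations shown is chosen to match the interval quoted in the text):

    def f(x):
        return 3 * x**3 - 24   # the function from the example above

    a, b = 0.0, 3.0            # initial bracket, with f(a) < 0 < f(b)
    for step in range(4):
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid            # the root lies in the right half
        else:
            b = mid            # the root lies in the left half
        print(step + 1, a, b)
    # After four steps the bracket is [1.875, 2.0625]; the exact root is 2.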
Conditioning
Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
Well-conditioned problem: By contrast, evaluating the same function near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x).
Discretization
Furthermore, continuous problems must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum.
Generation and propagation of errors
The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem.
Round-off
Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are).
Truncation and discretization error
Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above, to compute the solution of 3x³ + 4 = 28, after ten iterations the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01.
Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e is even more inexact.
A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen.
Numerical stability and well-posed problems
An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error.
Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible.
So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem.
Areas of study
The field of numerical analysis includes many sub-disciplines. Some of the major ones are:
Computing values of functions
One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging in the number in the formula is sometimes not very efficient. For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic.
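As a sketch of the Horner scheme, the function below evaluates a polynomial from its coefficients using one multiplication and one addition per coefficient (a minimal Python example; the coefficient ordering, highest degree first, is a convention chosen here):

    def horner(coeffs, x):
        """Evaluate the polynomial with the given coefficients (highest degree first) at x."""
        result = 0.0
        for c in coeffs:
            result = result * x + c   # fold in one coefficient per step
        return result

    # p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
    print(horner([2, -6, 2, -1], 3))  # prints 5.0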
Interpolation, extrapolation, and regression
Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points?
Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found.
Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be estimated. The least-squares method is one way to achieve this.
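A least-squares fit of this kind might be sketched as follows (this assumes the NumPy library; the sample data are invented for illustration):

    import numpy as np

    # Noisy measurements of a roughly linear relationship
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.8])

    # Fit a straight line y ~ a*x + b by minimising the sum of squared residuals
    a, b = np.polyfit(x, y, deg=1)

    print(a * 2.5 + b)   # interpolation: a query between the given points
    print(a * 7.0 + b)   # extrapolation: a query beyond the given points, less reliable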
Solving equations and systems of equations
Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2x + 5 = 3 is linear while 2x² + 5 = 3 is not.
Much effort has been put in the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or hermitian) and positive-definite matrix, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting.
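A small example contrasting a direct solve with a few Jacobi iterations might look like this (assuming NumPy; the matrix is a made-up diagonally dominant example, chosen so that the Jacobi iteration converges):

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 5.0]])
    b = np.array([6.0, 10.0, 11.0])

    # Direct method: a factorisation-based solve, exact up to round-off
    x_direct = np.linalg.solve(A, b)

    # Iterative method: Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), with A = D + R
    D = np.diag(np.diag(A))
    R = A - D
    x = np.zeros_like(b)
    for _ in range(25):
        x = np.linalg.solve(D, b - R @ x)   # D is diagonal, so this step is cheap

    print(x_direct)
    print(x)   # the Jacobi iterates approach the direct solution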
Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations.
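Newton's method for the cubic used earlier can be sketched in a few lines (the derivative is supplied by hand; the starting guess and tolerance are arbitrary choices):

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Find a root of f by Newton's method, given its derivative df."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Same problem as before: 3x^3 - 24 = 0, whose root is 2
    print(newton(lambda x: 3 * x**3 - 24, lambda x: 9 * x**2, x0=3.0))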
Solving eigenvalue or singular value problems
Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis.
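The idea behind SVD-based compression can be sketched briefly (assuming NumPy; the "image" here is just a random matrix standing in for pixel data, and the retained rank is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))             # stand-in for a grayscale image

    U, s, Vt = np.linalg.svd(image, full_matrices=False)

    k = 10                                   # keep only the k largest singular values
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Storage falls from 64*64 values to roughly k*(64 + 64 + 1), at the cost of some error
    print(np.linalg.norm(image - approx) / np.linalg.norm(image))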
Optimization
Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints.
The field of optimization is further split in several subfields, depending on the form of the objective function and the constraint. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method.
The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems.
Evaluating integrals
Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature. These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids.
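A composite midpoint rule, one of the simplest Newton–Cotes formulas, can be sketched as follows (a plain-Python illustration; the integrand and interval are chosen here only as an example with a known exact answer):

    import math

    def midpoint_rule(f, a, b, n):
        """Approximate the integral of f over [a, b] using n equal subintervals,
        evaluating f at the midpoint of each one."""
        h = (b - a) / n
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    # The integral of sin(x) over [0, pi] is exactly 2
    print(midpoint_rule(math.sin, 0.0, math.pi, 100))   # close to 2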
Differential equations
Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations.
Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation.
Software
Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library.
Over the years the Royal Statistical Society published numerous algorithms in its journal Applied Statistics (the "AS" functions); the ACM did likewise in its Transactions on Mathematical Software ("TOMS"); and the Naval Surface Warfare Center several times published its Library of Mathematics Subroutines.
There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude.
Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results.
Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis.
Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built-in "solver".
See also
:Category:Numerical analysts
Analysis of algorithms
Computational science
Computational physics
Gordon Bell Prize
Interval arithmetic
List of numerical analysis topics
Local linearization method
Numerical differentiation
Numerical Recipes
Probabilistic numerics
Symbolic-numeric computation
Validated numerics
Notes
References
Citations
Sources
David Kincaid and Ward Cheney: Numerical Analysis : Mathematics of Scientific Computing, 3rd Ed., AMS, ISBN 978-0-8218-4788-6 (2002).
External links
Journals
Numerische Mathematik, volumes 1–..., Springer, 1959–
volumes 1–66, 1959–1994 (searchable; pages are images).
SIAM Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964–
Online texts
Numerical Recipes, William H. Press (free, downloadable previous editions)
First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner
CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01)
Numerical Methods, ch 3. in the Digital Library of Mathematical Functions
Numerical Interpolation, Differentiation and Integration, ch 25. in the Handbook of Mathematical Functions (Abramowitz and Stegun)
Online course material
Numerical Methods, Stuart Dalziel University of Cambridge
Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania
Numerical methods, John D. Fenton University of Karlsruhe
Numerical Methods for Physicists, Anthony O’Hare Oxford University
Lectures in Numerical Analysis (archived), R. Radok Mahidol University
Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology
Numerical Analysis for Engineering, D. W. Harder University of Waterloo
Introduction to Numerical Analysis, Doron Levy University of Maryland
Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton
Mathematical physics
Computational science
Biological computing
Biological computers use biologically derived molecules — such as DNA and/or proteins — to perform digital or real computations.
The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1-100 nanometers) and biologically based materials. A more restrictive definition views nanobiotechnology more specifically as the design and engineering of proteins that can then be assembled into larger, functional structures.
The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer.
Scientific background
Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers.
Biochemical computers
Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality. Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input).
Biomechanical computers
Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output. Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output.
Bioelectronic computers
Biocomputers can also be constructed in order to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output that is based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the nature of the electrical conductivity that is observed in the bioelectronic computer. This output comprises specifically designed biomolecules that conduct electricity in highly specific manners based upon the initial conditions that serve as the input of the bioelectronic system.
Network-based biocomputers
In networks-based biocomputation, self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al., mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments represent correct solutions to the algorithm. Exits not visited are non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin and kinesin, respectively, are attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The energy conversion from chemical energy (ATP) to mechanical energy (motility) is highly efficient when compared with e.g. electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step.
Engineering biocomputers
The behavior of biologically derived computational systems such as these relies on the particular molecules that make up the system, which are primarily proteins but may also include DNA molecules. Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. The chemical nature of a protein is dictated by its sequence of amino acids—the chemical building blocks of proteins. This sequence is in turn dictated by a specific sequence of DNA nucleotides—the building blocks of DNA molecules. Proteins are manufactured in biological systems through the translation of nucleotide sequences by biological molecules called ribosomes, which assemble individual amino acids into polypeptides that form functional proteins based on the nucleotide sequence that the ribosome interprets. What this ultimately means is that one can engineer the chemical components necessary to create a biological system capable of performing computations by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system. Thus, implementing nanobiotechnology to design and produce synthetically designed proteins—as well as the design and synthesis of artificial DNA molecules—can allow the construction of functional biocomputers (e.g. Computational Genes).
Biocomputers can also be designed with cells as their basic components. Chemically induced dimerization systems can be used to make logic gates from individual cells. These logic gates are activated by chemical agents that induce interactions between previously non-interacting proteins and trigger some observable change in the cell.
Network-based biocomputers are engineered by nanofabrication of the hardware from wafers where the channels are etched by electron-beam lithography or nano-imprint lithography. The channels are designed to have a high aspect ratio of cross section so the protein filaments will be guided. Also, split and pass junctions are engineered so filaments will propagate in the network and explore the allowed paths. Surface silanization ensures that the motility proteins can be affixed to the surface and remain functional. The molecules that perform the logic operations are derived from biological tissue.
Economics
All biological organisms have the ability to self-replicate and self-assemble into functional components. The economical benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule. This DNA molecule could then be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers require manual production, biocomputers could be produced in large quantities from cultures without any additional machinery needed to assemble them.
Notable advancements in biocomputer technology
Currently, biocomputers exist with various functional capabilities that include operations of "binary" logic and mathematical calculations. Tom Knight of the MIT Artificial Intelligence Laboratory first suggested a biochemical computing scheme in which protein concentrations are used as binary signals that ultimately serve to perform logical operations. At or above a certain concentration of a particular biochemical product in a biocomputer chemical pathway indicates a signal that is either a 1 or a 0. A concentration below this level indicates the other, remaining signal. Using this method as computational analysis, biochemical computers can perform logical operations in which the appropriate binary output will occur only under specific logical constraints on the initial conditions. In other words, the appropriate binary output serves as a logically derived conclusion from a set of initial conditions that serve as premises from which the logical conclusion can be made. In addition to these types of logical operations, biocomputers have also been shown to demonstrate other functional capabilities, such as mathematical computations. One such example was provided by W.L. Ditto, who in 1999 created a biocomputer composed of leech neurons at Georgia Tech which was capable of performing simple addition. These are just a few of the notable uses that biocomputers have already been engineered to perform, and the capabilities of biocomputers are becoming increasingly sophisticated. Because of the availability and potential economic efficiency associated with producing biomolecules and biocomputers—as noted above—the advancement of the technology of biocomputers is a popular, rapidly growing subject of research that is likely to see much progress in the future.
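The thresholding idea behind such schemes can be caricatured in a few lines (a purely illustrative abstraction, not a model of any real biochemical pathway; the threshold value and concentrations are arbitrary):

    THRESHOLD = 0.5   # arbitrary concentration above which a species is read as "1"

    def to_bit(concentration):
        """Interpret a chemical concentration as a binary signal."""
        return 1 if concentration >= THRESHOLD else 0

    def and_gate(conc_a, conc_b):
        """Logical AND of two concentration-encoded inputs."""
        return to_bit(conc_a) & to_bit(conc_b)

    print(and_gate(0.8, 0.7))   # 1: both inputs above threshold
    print(and_gate(0.8, 0.1))   # 0: second input below threshold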
In March 2013, a team of bioengineers from Stanford University, led by Drew Endy, announced that they had created the biological equivalent of a transistor, which they dubbed a "transcriptor". The invention was the final of the three components necessary to build a fully functional computer: data storage, information transmission, and a basic system of logic.
In July 2017, separate experiments with E. coli published in Nature showed the potential of using living cells for computing tasks and storing information. A team formed with collaborators of the Biodesign Institute at Arizona State University and Harvard's Wyss Institute for Biologically Inspired Engineering developed a biological computer inside E. coli that responded to a dozen inputs. The team called the computer "ribocomputer", as it was composed of ribonucleic acid. Harvard researchers proved that it is possible to store information in bacteria after successfully archiving images and movies in the DNA of living E. coli cells.
In 2021, a team led by biophysicist Sangram Bagh carried out a study with E. coli to solve 2 × 2 maze problems, probing the principle of distributed computing among cells.
Parallel biological computing with networks, where bio-agent movement corresponds to arithmetical addition was demonstrated in 2016 on a SUBSET SUM instance with 8 candidate solutions.
Future potential of biocomputers
Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available non-bio computers. Some people believe that biocomputers have great potential, but this has yet to be demonstrated.
The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts.
See also
Biotechnology
Computational gene
Computer
DNA computing
Human biocomputer
Molecular electronics
Nanotechnology
Nanobiotechnology
Peptide computing
Wetware computer
Unconventional computing
References
Nanotechnology
Biotechnology
Models of computation
Isoleucine
Isoleucine (symbol Ile or I) is an α-amino acid that is used in the biosynthesis of proteins. It contains an α-amino group (which is in the protonated −NH3+ form under biological conditions), an α-carboxylic acid group (which is in the deprotonated −COO− form under biological conditions), and a hydrocarbon side chain with a branch (a central carbon atom bound to three other carbon atoms). It is classified as a non-polar, uncharged (at physiological pH), branched-chain, aliphatic amino acid. It is essential in humans, meaning the body cannot synthesize it. Essential amino acids are necessary in the human diet. In plants isoleucine can be synthesized from threonine and methionine. In plants and bacteria, isoleucine is synthesized from pyruvate employing leucine biosynthesis enzymes. It is encoded by the codons AUU, AUC, and AUA.
Metabolism
Biosynthesis
In plants and microorganisms, isoleucine is synthesized from pyruvate and alpha-ketobutyrate. This pathway is not present in humans. Enzymes involved in this biosynthesis include:
Acetolactate synthase (also known as acetohydroxy acid synthase)
Acetohydroxy acid isomeroreductase
Dihydroxyacid dehydratase
Valine aminotransferase
Catabolism
Isoleucine is both a glucogenic and a ketogenic amino acid. After transamination with alpha-ketoglutarate, the carbon skeleton is oxidised and split into propionyl-CoA and acetyl-CoA. Propionyl-CoA is converted into succinyl-CoA, a TCA cycle intermediate which can be converted into oxaloacetate for gluconeogenesis (hence glucogenic). In mammals acetyl-CoA cannot be converted to carbohydrate but can be either fed into the TCA cycle by condensing with oxaloacetate to form citrate or used in the synthesis of ketone bodies (hence ketogenic) or fatty acids.
Metabolic diseases
The degradation of isoleucine is impaired in the following metabolic diseases:
Combined malonic and methylmalonic aciduria (CMAMMA)
Maple syrup urine disease (MSUD)
Methylmalonic acidemia
Propionic acidemia
Insulin resistance
Isoleucine, like other branched-chain amino acids, is associated with insulin resistance: higher levels of isoleucine are observed in the blood of diabetic mice, rats, and humans. In diet-induced obese and insulin resistant mice, a diet with decreased levels of isoleucine (with or without the other branched-chain amino acids) results in reduced adiposity and improved insulin sensitivity. Reduced dietary levels of isoleucine are required for the beneficial metabolic effects of a low protein diet. In humans, a protein restricted diet lowers blood levels of isoleucine and decreases fasting blood glucose levels. Mice fed a low isoleucine diet are leaner, live longer, and are less frail. In humans, higher dietary levels of isoleucine are associated with greater body mass index.
Functions and requirement
In 2002, the Food and Nutrition Board (FNB) of the U.S. Institute of Medicine set Recommended Dietary Allowances (RDAs) for the essential amino acids. For adults 19 years and older, 19 mg of isoleucine per kg of body weight is required daily.
Beside its biological role as a nutrient, isoleucine also participates in regulation of glucose metabolism. Isoleucine is an essential component of many proteins. As an essential amino acid, isoleucine must be ingested or protein production in the cell will be disrupted. Fetal hemoglobin is one of the many proteins that require isoleucine. Isoleucine is present in the gamma chain of fetal hemoglobin and must be present for the protein to form.
Genetic diseases can change the consumption requirements of isoleucine. Amino acids cannot be stored in the body. Buildup of excess amino acids will cause a buildup of toxic molecules, so humans have many pathways to degrade each amino acid once the need for protein synthesis has been met. Mutations in isoleucine-degrading enzymes can lead to a dangerous buildup of isoleucine and its toxic derivatives. One example is maple syrup urine disease (MSUD), a disorder that leaves people unable to break down isoleucine, valine, and leucine. People with MSUD manage their disease by a reduced intake of all three of those amino acids alongside drugs that help excrete built-up toxins.
Many animals and plants are dietary sources of isoleucine as a component of proteins. Foods that have high amounts of isoleucine include eggs, soy protein, seaweed, turkey, chicken, lamb, cheese, and fish.
Synthesis
Routes to isoleucine are numerous. One common multistep procedure starts from 2-bromobutane and diethylmalonate. Synthetic isoleucine was first reported in 1905 by French chemists Bouveault and Locquin.
Discovery
German chemist Felix Ehrlich discovered isoleucine in 1903 while studying the composition of beet-sugar molasses. In 1907 Ehrlich carried out further studies on fibrin, egg albumin, gluten, and beef muscle, which verified the natural occurrence of isoleucine in proteins. Ehrlich published his own synthesis of isoleucine in 1908.
See also
Alloisoleucine, the diasteromer of isoleucine
Low Isoleucine protein foods
References
External links
Isoleucine degradation
Isoleucine biosynthesis
Alpha-Amino acids
Proteinogenic amino acids
Glucogenic amino acids
Ketogenic amino acids
Branched-chain amino acids
Essential amino acids
Residue (chemistry)
In chemistry, residue is whatever remains or acts as a contaminant after a given class of events. Residue may be the material remaining after a process of preparation, separation, or purification, such as distillation, evaporation, or filtration. It may also denote the undesired by-products of a chemical reaction.
Residues as an undesired by-product are a concern in agricultural and food industries.
Food safety
Toxic chemical residues, wastes or contamination from other processes, are a concern in food safety. The most common food residues originate from pesticides, veterinary drugs, and industrial chemicals. For example, the U.S. Food and Drug Administration (FDA) and the Canadian Food Inspection Agency (CFIA) have guidelines for detecting chemical residues that are possibly dangerous to consume. In the U.S., the FDA is responsible for setting guidelines while other organizations enforce them.
Environmental concerns
Similar to the food industry, in environmental sciences residue also refers to chemical contaminants. Residues in the environment are often the result of industrial processes, such as escaped chemicals from mining processing, fuel leaks during industrial transportation, trace amounts of radioactive material, and excess pesticides that enter the soil.
Characteristic units within a molecule
Residue may refer to an atom or a group of atoms that form part of a molecule, such as a methyl group.
Biochemistry
In biochemistry and molecular biology, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.
In proteins, the carboxyl group of one amino acid links with the amino group of another amino acid to form a peptide. This results in the removal of water and what remains is called the residue. Naming of residues is done by replacing "acid" with "residue". A residue's properties will influence interactions with other residues and the overall chemical properties of the protein it resides in. One might say, "This protein consists of 118 amino acid residues" or "The histidine residue is considered to be basic because it contains an imidazole ring." Note that a residue is different from a moiety, which, in the above example would be constituted by the imidazole ring or the imidazole moiety.
References
Distillation
Lexical semantics
Lexical semantics (also known as lexicosemantics), as a subfield of linguistic semantics, is the study of word meanings. It includes the study of how words structure their meaning, how they act in grammar and compositionality, and the relationships between the distinct senses and uses of a word.
The units of analysis in lexical semantics are lexical units which include not only words but also sub-words or sub-units such as affixes and even compound words and phrases. Lexical units include the catalogue of words in a language, the lexicon. Lexical semantics looks at how the meaning of the lexical units correlates with the structure of the language or syntax. This is referred to as syntax-semantics interface.
The study of lexical semantics concerns:
the classification and decomposition of lexical items
the differences and similarities in lexical semantic structure cross-linguistically
the relationship of lexical meaning to sentence meaning and syntax.
Lexical units, also referred to as syntactic atoms, can be independent such as in the case of root words or parts of compound words or they require association with other units, as prefixes and suffixes do. The former are termed free morphemes and the latter bound morphemes. They fall into a narrow range of meanings (semantic fields) and can combine with each other to generate new denotations.
Cognitive semantics is the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics, introducing innovations like prototype theory, conceptual metaphors, and frame semantics.
Lexical relations
Lexical items contain information about category (lexical and syntactic), form and meaning. The semantics related to these categories then relate to each lexical item in the lexicon. Lexical items can also be semantically classified based on whether their meanings are derived from single lexical units or from their surrounding environment.
Lexical items participate in regular patterns of association with each other. Some relations between lexical items include hyponymy, hypernymy, synonymy, and antonymy, as well as homonymy.
Hyponymy and hypernymy
Hyponymy and hypernymy refer to a relationship between a general term and the more specific terms that fall under the category of the general term.
For example, the colors red, green, blue and yellow are hyponyms. They fall under the general term of color, which is the hypernym.
Hyponyms and hypernyms can be described by using a taxonomy, as seen in the example.
Synonym
Synonym refers to words that are pronounced and spelled differently but contain the same meaning.
Antonym
Antonym refers to words that are related by having the opposite meanings to each other.
There are three types of antonyms: graded antonyms, complementary antonyms, and relational antonyms.
Homonymy
Homonymy refers to the relationship between words that are spelled or pronounced the same way but hold different meanings.
Polysemy
Polysemy refers to a word having two or more related meanings.
Semantic networks
Lexical semantics also explores whether the meaning of a lexical unit is established by looking at its neighbourhood in the semantic network (the words it occurs with in natural sentences), or whether the meaning is already locally contained in the lexical unit.
In English, WordNet is an example of a semantic network. It contains English words that are grouped into synsets. Some semantic relations between these synsets are meronymy, hyponymy, synonymy, and antonymy.
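As a concrete illustration of querying such relations, the Python sketch below uses NLTK's WordNet interface (it assumes NLTK is installed and the WordNet corpus has been downloaded; the example words are arbitrary):

# Querying WordNet relations with NLTK (assumes nltk.download('wordnet') has been run).
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog")[0]           # the first synset for "dog"
print(dog.hypernyms())               # more general synsets (hypernymy)
print(dog.hyponyms()[:3])            # more specific synsets (hyponymy)

good = wn.synsets("good", pos=wn.ADJ)[0]
print(good.lemmas()[0].antonyms())   # antonymy is defined between lemmas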
Semantic fields
How lexical items map onto concepts
First proposed by Trier in the 1930s, semantic field theory proposes that a group of words with interrelated meanings can be categorized under a larger conceptual domain. This entire entity is thereby known as a semantic field. The words boil, bake, fry, and roast, for example, would fall under the larger semantic category of cooking. Semantic field theory asserts that lexical meaning cannot be fully understood by looking at a word in isolation, but by looking at a group of semantically related words. Semantic relations can refer to any relationship in meaning between lexemes, including synonymy (big and large), antonymy (big and small), hypernymy and hyponymy (rose and flower), converseness (buy and sell), and incompatibility. Semantic field theory does not have concrete guidelines that determine the extent of semantic relations between lexemes. The abstract validity of the theory is a subject of debate.
Knowing the meaning of a lexical item therefore means knowing the semantic entailments the word brings with it. However, it is also possible to understand only one word of a semantic field without understanding other related words. Take, for example, a taxonomy of plants and animals: it is possible to understand the words rose and rabbit without knowing what a marigold or a muskrat is. This is applicable to colors as well, such as understanding the word red without knowing the meaning of scarlet, but understanding scarlet without knowing the meaning of red may be less likely. A semantic field can thus be very large or very small, depending on the level of contrast being made between lexical items. While cat and dog both fall under the larger semantic field of animal, including the breed of dog, like German shepherd, would require contrasts between other breeds of dog (e.g. corgi, or poodle), thus expanding the semantic field further.
How lexical items map onto events
Event structure is defined as the semantic relation of a verb and its syntactic properties.
Event structure has three primary components:
primitive event type of the lexical item
event composition rules
mapping rules to lexical structure
Verbs can belong to one of three types: states, processes, or transitions.
(1a) defines the state of the door being closed; there is no opposition in this predicate. (1b) and (1c) both have predicates showing transitions of the door going from being implicitly open to closed. (1b) gives the intransitive use of the verb close, with no explicit mention of the causer, but (1c) makes explicit mention of the agent involved in the action.
Syntactic basis of event structure: a brief history
Generative semantics in the 1960s
The analysis of these different lexical units had a decisive role in the field of "generative linguistics" during the 1960s. The term generative was proposed by Noam Chomsky in his book Syntactic Structures published in 1957. The term generative linguistics was based on Chomsky's generative grammar, a linguistic theory that states systematic sets of rules (X' theory) can predict grammatical phrases within a natural language. Generative Linguistics is also known as Government-Binding Theory.
Generative linguists of the 1960s, including Noam Chomsky and Ernst von Glasersfeld, believed semantic relations between transitive verbs and intransitive verbs were tied to their independent syntactic organization. This meant that they saw a simple verb phrase as encompassing a more complex syntactic structure.
Lexicalist theories in the 1980s
Lexicalist theories became popular during the 1980s, and emphasized that a word's internal structure was a question of morphology and not of syntax. Lexicalist theories emphasized that complex words (resulting from compounding and derivation of affixes) have lexical entries that are derived from morphology, rather than resulting from overlapping syntactic and phonological properties, as Generative Linguistics predicts. The distinction between Generative Linguistics and Lexicalist theories can be illustrated by considering the transformation of the word destroy to destruction:
Generative Linguistics theory: states the transformation of destroy → destruction as the nominal, nom + destroy, combined with phonological rules that produce the output destruction. Views this transformation as independent of the morphology.
Lexicalist theory: sees destroy and destruction as having idiosyncratic lexical entries based on their differences in morphology. Argues that each morpheme contributes specific meaning. States that the formation of the complex word destruction is accounted for by a set of Lexical Rules, which are different and independent from syntactic rules.
A lexical entry lists the basic properties of either the whole word, or the individual properties of the morphemes that make up the word itself. The properties of lexical items include their category selection c-selection, selectional properties s-selection, (also known as semantic selection), phonological properties, and features. The properties of lexical items are idiosyncratic, unpredictable, and contain specific information about the lexical items that they describe.
As an example, consider a lexical entry for the verb put.
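A schematic sketch of the kind of information such an entry contains might look like the following; the particular values are illustrative, following the common textbook treatment of put as requiring both an object and a locative phrase, and are not quoted from the source.

# Schematic lexical entry for the verb "put" (illustrative values only).
put_entry = {
    "category": "V",                                 # lexical category
    "phonology": "/pʊt/",                            # phonological form
    "c-selection": ["DP", "PP"],                     # selects an object DP and a locative PP
    "s-selection": ["agent", "theme", "location"],   # semantic (thematic) roles
}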
Lexicalist theories state that a word's meaning is derived from its morphology or a speaker's lexicon, and not its syntax. The degree of morphology's influence on overall grammar remains controversial. Currently, the linguists that perceive one engine driving both morphological items and syntactic items are in the majority.
Micro-syntactic theories: 1990s to the present
By the early 1990s, Chomsky's minimalist framework on language structure led to sophisticated probing techniques for investigating languages. These probing techniques analyzed negative data over prescriptive grammars, and because of Chomsky's proposed Extended Projection Principle in 1986, probing techniques showed where specifiers of a sentence had moved to in order to fulfill the EPP. This allowed syntacticians to hypothesize that lexical items with complex syntactic features (such as ditransitive, inchoative, and causative verbs), could select their own specifier element within a syntax tree construction. (For more on probing techniques, see Suci, G., Gammon, P., & Gamlin, P. (1979)).
This brought the focus back on the syntax-lexical semantics interface; however, syntacticians still sought to understand the relationship between complex verbs and their related syntactic structure, and to what degree the syntax was projected from the lexicon, as the Lexicalist theories argued.
In the mid 1990s, linguists Heidi Harley, Samuel Jay Keyser, and Kenneth Hale addressed some of the implications posed by complex verbs and a lexically-derived syntax. Their proposals indicated that the predicates CAUSE and BECOME, referred to as subunits within a Verb Phrase, acted as a lexical semantic template. Predicates are verbs and state or affirm something about the subject of the sentence or the argument of the sentence. For example, the predicates went and is here below affirm the argument of the subject and the state of the subject respectively.
The subunits of Verb Phrases led to the Argument Structure Hypothesis and Verb Phrase Hypothesis, both outlined below. The recursion found under the "umbrella" Verb Phrase, the VP Shell, accommodated binary-branching theory; another critical topic during the 1990s. Current theory recognizes that the predicate in the Specifier position of a tree in inchoative/anticausative verbs (intransitive), or causative verbs (transitive), is what selects the theta role conjoined with a particular verb.
Hale & Keyser 1990
Kenneth Hale and Samuel Jay Keyser introduced their thesis on lexical argument structure during the early 1990s.
They argue that a predicate's argument structure is represented in the syntax, and that the syntactic representation of the predicate is a lexical projection of its arguments. Thus, the structure of a predicate is strictly a lexical representation, where each phrasal head projects its argument onto a phrasal level within the syntax tree. The selection of this phrasal head is based on Chomsky's Empty Category Principle. This lexical projection of the predicate's argument onto the syntactic structure is the foundation for the Argument Structure Hypothesis. This idea coincides with Chomsky's Projection Principle, because it forces a VP to be selected locally and be selected by a Tense Phrase (TP).
Based on the interaction between lexical properties, locality, and the properties of the EPP (where a phrasal head selects another phrasal element locally), Hale and Keyser make the claim that the Specifier position or a complement are the only two semantic relations that project a predicate's argument. In 2003, Hale and Keyser put forward this hypothesis and argued that a lexical unit must have one or the other, Specifier or Complement, but cannot have both.
Halle & Marantz 1993
Morris Halle and Alec Marantz introduced the notion of distributed morphology in 1993. This theory views the syntactic structure of words as a result of morphology and semantics, instead of the morpho-semantic interface being predicted by the syntax. Essentially, the idea that under the Extended Projection Principle there is a local boundary under which a special meaning occurs. This meaning can only occur if a head-projecting morpheme is present within the local domain of the syntactic structure. The following is an example of the tree structure proposed by distributed morphology for the sentence "John's destroying the city". Destroy is the root, V-1 represents verbalization, and D represents nominalization.
Ramchand 2008
In her 2008 book, Verb Meaning and The Lexicon: A First-Phase Syntax, linguist Gillian Ramchand acknowledges the roles of lexical entries in the selection of complex verbs and their arguments. 'First-Phase' syntax proposes that event structure and event participants are directly represented in the syntax by means of binary branching. This branching ensures that the Specifier is consistently the subject, even when investigating the projection of a complex verb's lexical entry and its corresponding syntactic construction. This generalization is also present in Ramchand's theory that the complement of a head for a complex verb phrase must co-describe the verb's event.
Ramchand also introduced the concept of Homomorphic Unity, which refers to the structural synchronization between the head of a complex verb phrase and its complement. According to Ramchand, Homomorphic Unity is "when two event descriptors are syntactically Merged, the structure of the complement must unify with the structure of the head."
Classification of event types
Intransitive verbs: unaccusative versus unergative
The unaccusative hypothesis was put forward by David Perlmutter in 1978, and describes how two classes of intransitive verbs have two different syntactic structures. These are unaccusative verbs and unergative verbs. These classes of verbs are defined by Perlmutter only in syntactic terms. They have the following structures underlyingly:
unaccusative verb: __ [VP V NP]
unergative verb: NP [VP V]
The following is an example from English:
In (2a) the verb underlyingly takes a direct object, while in (2b) the verb underlyingly takes a subject.
Transitivity alternations: the inchoative/causative alternation
The change-of-state property of Verb Phrases (VP) is a significant observation for the syntax of lexical semantics because it provides evidence that subunits are embedded in the VP structure, and that the meaning of the entire VP is influenced by this internal grammatical structure. (For example, the VP the vase broke carries a change-of-state meaning of the vase becoming broken, and thus has a silent BECOME subunit within its underlying structure.) There are two types of change-of-state predicates: inchoative and causative.
Inchoative verbs are intransitive, meaning that they occur without a direct object, and these verbs express that their subject has undergone a certain change of state. Inchoative verbs are also known as anticausative verbs. Causative verbs are transitive, meaning that they occur with a direct object, and they express that the subject causes a change of state in the object.
Linguist Martin Haspelmath classifies inchoative/causative verb pairs under three main categories: causative, anticausative, and non-directed alternations. Non-directed alternations are further subdivided into labile, equipollent, and suppletive alternations.
English tends to favour labile alternations, meaning that the same verb is used in the inchoative and causative forms. This can be seen in the following example: broke is an intransitive inchoative verb in (3a) and a transitive causative verb in (3b).
As seen in the underlying tree structure for (3a), the silent subunit BECOME is embedded within the Verb Phrase (VP), resulting in the inchoative change-of-state meaning (y become z). In the underlying tree structure for (3b), the silent subunits CAUS and BECOME are both embedded within the VP, resulting in the causative change-of-state meaning (x cause y become z).
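Purely as an illustration, these silent subunits can be written out as nested event templates and spelled back into rough paraphrases; the representation below is a sketch, not a formalism used in the literature cited here.

# Event templates for the inchoative/causative alternation (illustrative only).
inchoative = ("BECOME", ("STATE", "y", "broken"))                  # y become broken
causative = ("CAUS", "x", ("BECOME", ("STATE", "y", "broken")))    # x cause y become broken

def gloss(event):
    """Spell out a template as a rough English paraphrase."""
    head = event[0]
    if head == "STATE":
        return f"{event[1]} be {event[2]}"
    if head == "BECOME":
        state = event[1]                      # assumes the complement is a STATE
        return f"{state[1]} become {state[2]}"
    if head == "CAUS":
        return f"{event[1]} cause {gloss(event[2])}"

print(gloss(inchoative))   # y become broken
print(gloss(causative))    # x cause y become broken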
English change of state verbs are often de-adjectival, meaning that they are derived from adjectives. We can see this in the following example:
In example (4a) we start with a stative intransitive adjective, and derive (4b) where we see an intransitive inchoative verb. In (4c) we see a transitive causative verb.
Marked inchoatives
Some languages (e.g., German, Italian, and French), have multiple morphological classes of inchoative verbs. Generally speaking, these languages separate their inchoative verbs into three classes: verbs that are obligatorily unmarked (they are not marked with a reflexive pronoun, clitic, or affix), verbs that are optionally marked, and verbs that are obligatorily marked. The causative verbs in these languages remain unmarked. Haspelmath refers to this as the anticausative alternation.
For example, inchoative verbs in German are classified into three morphological classes. Class A verbs necessarily form inchoatives with the reflexive pronoun sich, Class B verbs form inchoatives necessarily without the reflexive pronoun, and Class C verbs form inchoatives optionally with or without the reflexive pronoun. In example (5), the verb is an unmarked inchoative verb from Class B, which also remains unmarked in its causative form.
German
In contrast, the verb öffnete is a Class A verb which necessarily takes the reflexive pronoun sich in its inchoative form, but remains unmarked in its causative form.
German
There has been some debate as to whether the different classes of inchoative verbs are purely based in morphology, or whether the differentiation is derived from the lexical-semantic properties of each individual verb. While this debate is still unresolved in languages such as Italian, French, and Greek, it has been suggested by linguist Florian Schäfer that there are semantic differences between marked and unmarked inchoatives in German. Specifically, that only unmarked inchoative verbs allow an unintentional causer reading (meaning that they can take on an "x unintentionally caused y" reading).
Marked causatives
Causative morphemes are present in the verbs of many languages (e.g., Tagalog, Malagasy, Turkish, etc.), usually appearing in the form of an affix on the verb. This can be seen in the following examples from Tagalog, where the causative prefix pag- (realized here as nag) attaches to the verb tumba to derive a causative transitive verb in (7b), but the prefix does not appear in the inchoative intransitive verb in (7a). Haspelmath refers to this as the causative alternation.
Tagalog
Ditransitive verbs
Kayne's 1981 unambiguous path analysis
Richard Kayne proposed the idea of unambiguous paths as an alternative to c-commanding relationships, which is the type of structure seen in examples (8). The idea of unambiguous paths stated that an antecedent and an anaphor should be connected via an unambiguous path. This means that the line connecting an antecedent and an anaphor cannot be broken by another argument. When applied to ditransitive verbs, this hypothesis introduces the structure in diagram (8a). In this tree structure it can be seen that the same path can be traced from either DP to the verb. Tree diagram (8b) illustrates this structure with an example from English. This analysis was a step toward binary branching trees, which was a theoretical change that was furthered by Larson's VP-shell analysis.
Larson's 1988 "VP-shell" analysis
Larson posited his Single Complement Hypothesis in which he stated that every complement is introduced with one verb. The Double Object Construction presented in 1988 gave clear evidence of a hierarchical structure using asymmetrical binary branching. Sentences with double objects occur with ditransitive verbs, as we can see in the following example:
It appears as if the verb send has two objects, or complements (arguments): both Mary, the recipient, and parcel, the theme. The argument structure of ditransitive verb phrases is complex and has undergone different structural hypotheses.
The original structural hypothesis was that of ternary branching seen in (9a) and (9b), but following from Kayne's 1981 analysis, Larson maintained that each complement is introduced by a verb.
This hypothesis shows that there is a lower verb embedded within a VP shell that combines with an upper verb (which can be invisible), thus creating a VP shell (as seen in the tree diagram to the right). Most current theories no longer allow the ternary tree structure of (9a) and (9b), so the theme and the goal/recipient are seen in a hierarchical relationship within a binary branching structure.
Following are examples of Larson's tests to show that the hierarchical (superior) order of any two objects aligns with a linear order, so that the second is governed (c-commanded) by the first. This is in keeping with X'Bar Theory of Phrase Structure Grammar, with Larson's tree structure using the empty Verb to which the V is raised.
Reflexives and reciprocals (anaphors) show this relationship in which they must be c-commanded by their antecedents, such that the (10a) is grammatical but (10b) is not:
A pronoun must have a quantifier as its antecedent:
Question words follow this order:
The effect of negative polarity means that "any" must have a negative quantifier as an antecedent:
These tests with ditransitive verbs that confirm c-command also confirm the presence of underlying or invisible causative verbs. In ditransitive verbs such as give someone something, send someone something, show someone something etc. there is an underlying causative meaning that is represented in the underlying structure. As seen in example in (9a) above, John sent Mary a package, there is the underlying meaning that 'John "caused" Mary to have a package'.
Larson proposed that both sentences in (9a) and (9b) share the same underlying structure and that the difference on the surface lies in that the double object construction "John sent Mary a package" is derived by transformation from an NP plus PP construction "John sent a package to Mary".
Beck & Johnson's 2004 double object construction
Beck and Johnson, however, give evidence that the two underlying structures are not the same. In so doing, they also give further evidence of the presence of two VPs where the verb attaches to a causative verb. In examples (14a) and (b), each of the double object constructions are alternated with NP + PP constructions.
Beck and Johnson show that the object in (15a) has a different relation to the motion verb, as it is not able to carry the meaning of HAVING which the possessor in (9a) and (15a) can. In (15a), Satoshi is an animate possessor and so is caused to HAVE kisimen. The PP for Satoshi in (15b) is of a benefactive nature and does not necessarily carry this meaning of HAVE either.
The underlying structures are therefore not the same. The differences lie in the semantics and the syntax of the sentences, in contrast to the transformational theory of Larson. Further evidence for the structural existence of VP shells with an invisible verbal unit is given in the application of the adjunct or modifier "again". Sentence (16) is ambiguous and looking into the two different meanings reveals a difference in structure.
However, in (17a), it is clear that it was Sally who repeated the action of opening the door. In (17b), the event is in the door being opened and Sally may or may not have opened it previously. To render these two different meanings, "again" attaches to VPs in two different places, and thus describes two events with a purely structural change.
See also
Content word
Lexical analysis
Lexical chain
Lexicalization
Lexical markup framework
Lexical verb
Minimal recursion semantics
Ontology
Polysemy
Semantic primes
Semantic satiation
SemEval
Thematic role
Troponymy
Word sense
Word-sense disambiguation
References
External links
Semantics
Formal semantics (natural language)
Syntax–semantics interface
Dynamic stochastic general equilibrium
Dynamic stochastic general equilibrium modeling (abbreviated as DSGE, or DGE, or sometimes SDGE) is a macroeconomic method which is often employed by monetary and fiscal authorities for policy analysis, explaining historical time-series data, as well as future forecasting purposes. DSGE econometric modelling applies general equilibrium theory and microeconomic principles in a tractable manner to postulate economic phenomena, such as economic growth and business cycles, as well as policy effects and market shocks.
Terminology
As a practical matter, people often use the term "DSGE models" to refer to a particular class of classically quantitative econometric models of business cycles or economic growth called real business cycle (RBC) models. DSGE models were initially proposed by Kydland & Prescott, and Long & Plosser; Charles Plosser described RBC models as a precursor for DSGE modeling.
As mentioned in the Introduction, DSGE models are the predominant framework of macroeconomic analysis. They are multifaceted, and their combination of micro-foundations and optimising economic behaviour of rational agents allows for a comprehensive analysis of macro effects. As indicated by their name, their defining characteristics are as follows:
Dynamic: The effect of current choices on future uncertainty makes the models dynamic and assigns a certain relevance to the expectations of agents in forming macroeconomic outcomes.
Stochastic: The models take into consideration the transmission of random shocks into the economy and the consequent economic fluctuations.
General: referring to the entire economy as a whole (within the model) in that price levels and output levels are determined jointly. This is opposed to a partial equilibrium, where price levels are taken as given and only output levels are determined within the model economy.
Equilibrium: In accordance with Léon Walras's General Competitive Equilibrium Theory, the model captures the interaction between policy actions and behaviour of agents.
RBC modeling
The formulation and analysis of monetary policy has undergone significant evolution in recent decades and the development of DSGE models has played a key role in this process. As mentioned above, DSGE models are seen as an update of RBC (real business cycle) models.
Early real business-cycle models postulated an economy populated by a representative consumer who operates in perfectly competitive markets. The only sources of uncertainty in these models are "shocks" in technology. RBC theory builds on the neoclassical growth model, under the assumption of flexible prices, to study how real shocks to the economy might cause business cycle fluctuations.
The "representative consumer" assumption can either be taken literally or reflect a Gorman aggregation of heterogenous consumers who are facing idiosyncratic income shocks and complete markets in all assets. These models took the position that fluctuations in aggregate economic activity are actually an "efficient response" of the economy to exogenous shocks.
The models were criticized on a number of issues:
Microeconomic data cast doubt on some of the key assumptions of the model, such as: perfect credit- and insurance-markets; perfectly friction-less labour markets; etc.
They had difficulty in accounting for some key properties of the aggregate data, such as: the observed volatility in hours worked; the equity premium; etc.
Open-economy versions of these models failed to account for observations such as: the cyclical movement of consumption and output across countries; the extremely high correlation between nominal and real exchange rates; etc.
They are mute on many policy related issues of importance to macroeconomists and policy makers, such as the consequences of different monetary policy rules for aggregate economic activity.
The Lucas critique
In a 1976 paper, Robert Lucas argued that it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. Lucas claimed that the decision rules of Keynesian models, such as the fiscal multiplier, cannot be considered as structural, in the sense that they cannot be invariant with respect to changes in government policy variables, stating:
Given that the structure of an econometric model consists of optimal decision-rules of economic agents, and that optimal decision-rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.
This meant that, because the parameters of the models were not structural, i.e. not indifferent to policy, they would necessarily change whenever policy was changed. The so-called Lucas critique followed similar criticism undertaken earlier by Ragnar Frisch, in his critique of Jan Tinbergen's 1939 book Statistical Testing of Business-Cycle Theories, where Frisch accused Tinbergen of not having discovered autonomous relations, but "coflux" relations, and by Jacob Marschak, in his 1953 contribution to the Cowles Commission Monograph, where he submitted that
In predicting the effect of its decisions (policies), the government...has to take account of exogenous variables, whether controlled by it (the decisions themselves, if they are exogenous variables) or uncontrolled (e.g. weather), and of structural changes, whether controlled by it (the decisions themselves, if they change the structure) or uncontrolled (e.g. sudden changes in people's attitude).
The Lucas critique is representative of the paradigm shift that occurred in macroeconomic theory in the 1970s towards attempts at establishing micro-foundations.
Response to the Lucas critique
In the 1980s, macro models emerged that attempted to directly respond to Lucas through the use of rational expectations econometrics.
In 1982, Finn E. Kydland and Edward C. Prescott created a real business cycle (RBC) model to "predict the consequence of a particular policy rule upon the operating characteristics of the economy." The stated, exogenous, stochastic components in their model are "shocks to technology" and "imperfect indicators of productivity." The shocks involve random fluctuations in the productivity level, which shift up or down the trend of economic growth. Examples of such shocks include innovations, the weather, sudden and significant price increases in imported energy sources, stricter environmental regulations, etc. The shocks directly change the effectiveness of capital and labour, which, in turn, affects the decisions of workers and firms, who then alter what they buy and produce. This eventually affects output.
The authors stated that, since fluctuations in employment are central to the business cycle, the "stand-in consumer [of the model] values not only consumption but also leisure," meaning that unemployment movements essentially reflect the changes in the number of people who want to work. "Household-production theory," as well as "cross-sectional evidence" ostensibly support a "non-time-separable utility function that admits greater inter-temporal substitution of leisure, something which is needed," according to the authors, "to explain aggregate movements in employment in an equilibrium model." For the K&P model, monetary policy is irrelevant for economic fluctuations.
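The basic mechanism, productivity shocks propagating into output fluctuations, can be illustrated with a deliberately stripped-down simulation. The sketch below is not the Kydland-Prescott model itself (labour is held fixed and saving follows a crude constant rule), and the parameter values are arbitrary but conventional-looking.

# Stripped-down illustration of technology shocks driving output fluctuations.
import numpy as np

rng = np.random.default_rng(0)
T, alpha, rho, sigma, s, delta = 200, 0.36, 0.95, 0.007, 0.2, 0.025

z = np.zeros(T)   # log total factor productivity, an AR(1) process
k = np.ones(T)    # capital stock
y = np.zeros(T)   # output

for t in range(T - 1):
    y[t] = np.exp(z[t]) * k[t] ** alpha          # Cobb-Douglas output, labour fixed at 1
    k[t + 1] = (1 - delta) * k[t] + s * y[t]     # crude constant-saving accumulation rule
    z[t + 1] = rho * z[t] + sigma * rng.standard_normal()

print("std. dev. of log output:", np.log(y[:-1]).std())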
The associated policy implications were clear: There is no need for any form of government intervention since, ostensibly, government policies aimed at stabilizing the business cycle are welfare-reducing. Since microfoundations are based on the preferences of decision-makers in the model, DSGE models feature a natural benchmark for evaluating the welfare effects of policy changes. Furthermore, the integration of such microfoundations in DSGE modeling enables the model to accurately adjust to shifts in fundamental behaviour of agents and is thus regarded as an "impressive response" to the Lucas critique. The Kydland/Prescott 1982 paper is often considered the starting point of RBC theory and of DSGE modeling in general and its authors were awarded the 2004 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel.
DSGE modeling
Structure
By applying dynamic principles, dynamic stochastic general equilibrium models contrast with the static models studied in applied general equilibrium models and some computable general equilibrium models.
DSGE models employed by governments and central banks for policy analysis are relatively simple. Their structure is built around three interrelated sections including that of demand, supply, and the monetary policy equation. These three sections are formally defined by micro-foundations and make explicit assumptions about the behavior of the main economic agents in the economy, i.e. households, firms, and the government. The interaction of the agents in markets covers every period of the business cycle, which ultimately qualifies the "general equilibrium" aspect of this model. The preferences (objectives) of the agents in the economy must be specified. For example, households might be assumed to maximize a utility function over consumption and labor effort. Firms might be assumed to maximize profits and to have a production function, specifying the amount of goods produced, depending on the amount of labor, capital and other inputs they employ. Technological constraints on firms' decisions might include costs of adjusting their capital stocks, their employment relations, or the prices of their products.
Below is an example of the set of assumptions a DSGE is built upon:
Perfect competition in all markets
All prices adjust instantaneously
Rational expectations
No asymmetric information
The competitive equilibrium is Pareto optimal
Firms are identical and price takers
Infinitely lived identical price-taking households
to which the following frictions are added:
Distortionary taxes (labour taxes) – to account for the fact that taxation is not lump-sum
Habit persistence (the period utility function depends on a quasi-difference of consumption)
Adjustment costs on investments – to make investments less volatile
Labour adjustment costs – to account for costs firms face when changing the level of employment
The models' general equilibrium nature is presumed to capture the interaction between policy actions and agents' behavior, while the models specify assumptions about the stochastic shocks that give rise to economic fluctuations. Hence, the models are presumed to "trace more clearly the shocks' transmission to the economy." This is exemplified in the below explanation of a simplified DSGE model.
Demand defines real activity as a function of the nominal interest rate minus expected inflation, and of expectations regarding future real activity.
The demand block confirms the general economic principle that temporarily high interest rates encourage people and firms to save instead of consuming or investing, and also suggests that current spending is likely to increase when future prospects look promising, regardless of the rate level.
Supply is dependent on demand through the input of the level of activity, which impacts the determination of inflation.
E.g. in times of high activity, firms are required to increase the wage rate in order to encourage employees to work longer hours, which leads to a general increase in marginal costs and thus a subsequent increase in expected future inflation and in current inflation.
The demand and supply sections simultaneously contribute to a determination of monetary policy. The formal equation specified in this section describes the conditions under which the central bank determines the nominal interest rate.
As such, general central bank behaviour is reflected through this, i.e. raising the bank rate (short-term interest rates) in periods of rapid or unsustainable growth, and vice versa.
There is a final flow from monetary policy towards demand representing the impact of adjustments in nominal interest rates on real activity and subsequently inflation.
As such, a complete simplified model of the relationship between three key features is defined. This dynamic interaction between the endogenous variables of output, inflation, and the nominal interest rate is fundamental in DSGE modelling.
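Written out, this simplified three-block structure corresponds to the canonical three-equation New Keynesian system shown below in LaTeX. The notation and coefficient names (output gap x_t, inflation pi_t, nominal rate i_t, and the shocks u_t) are the conventional textbook ones, not taken from this text.

\begin{aligned}
x_t &= \mathbb{E}_t x_{t+1} - \frac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1}\right) + u^{d}_t && \text{(demand / IS curve)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t + u^{s}_t && \text{(supply / Phillips curve)} \\
i_t &= \phi_\pi \pi_t + \phi_x x_t + u^{m}_t && \text{(monetary policy rule)}
\end{aligned}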
Schools
Two schools of analysis form the bulk of DSGE modeling: the classic RBC models, and the New-Keynesian DSGE models that build on a structure similar to RBC models, but instead assume that prices are set by monopolistically competitive firms, and cannot be instantaneously and costlessly adjusted. Rotemberg & Woodford introduced this framework in 1997. Introductory and advanced textbook presentations of DSGE modeling are given by Galí (2008) and Woodford (2003). Monetary policy implications are surveyed by Clarida, Galí, and Gertler (1999).
The European Central Bank (ECB) has developed a DSGE model, called the Smets–Wouters model, which it uses to analyze the economy of the Eurozone as a whole. The Bank's analysts state that
developments in the construction, simulation and estimation of DSGE models have made it possible to combine a rigorous microeconomic derivation of the behavioural equations of macro models with an empirically plausible calibration or estimation which fits the main features of the macroeconomic time series.
The main difference between "empirical" DSGE models and the "more traditional macroeconometric models, such as the Area-Wide Model", according to the ECB, is that "both the parameters and the shocks to the structural equations are related to deeper structural parameters describing household preferences and technological and institutional constraints."
The Smets-Wouters model uses seven Eurozone area macroeconomic series: real GDP; consumption; investment; employment; real wages; inflation; and the nominal, short-term interest rate. Using Bayesian estimation and validation techniques, the bank's modeling is ostensibly able to compete with "more standard, unrestricted time series models, such as vector autoregression, in out-of-sample forecasting."
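As a sketch of the kind of unrestricted-VAR benchmark referred to here, the Python fragment below fits a vector autoregression and produces an out-of-sample forecast using statsmodels; the data are synthetic placeholders, and the lag length and holdout window are arbitrary choices, not those of the Smets-Wouters exercise.

# Out-of-sample forecasting with an unrestricted VAR benchmark (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.standard_normal((120, 3)),
                  columns=["output", "inflation", "interest_rate"])  # placeholder series

train, test = df.iloc[:-8], df.iloc[-8:]          # hold out the last 8 "quarters"
results = VAR(train).fit(maxlags=4)               # unrestricted VAR(4)
forecast = results.forecast(train.values[-results.k_ar:], steps=8)

rmse = ((forecast - test.values) ** 2).mean(axis=0) ** 0.5
print(pd.Series(rmse, index=df.columns))          # per-series forecast error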
Criticism
Bank of Lithuania Deputy Chairman Raimondas Kuodis disputes the very title of DSGE analysis: the models, he claims, are neither dynamic (since they contain no evolution of stocks of financial assets and liabilities), nor stochastic (because we live in a world of Knightian uncertainty and, since future outcomes or possible choices are unknown, risk analysis and expected utility theory are not very helpful), nor general (they lack a full accounting framework, a stock-flow consistent framework, which would significantly reduce the number of degrees of freedom in the economy), nor even about equilibrium (since markets clear only in a few quarters).
Willem Buiter, Citigroup Chief Economist, has argued that DSGE models rely excessively on an assumption of complete markets, and are unable to describe the highly nonlinear dynamics of economic fluctuations, making training in 'state-of-the-art' macroeconomic modeling "a privately and socially costly waste of time and resources". Narayana Kocherlakota, President of the Federal Reserve Bank of Minneapolis, wrote that
many modern macro models...do not capture an intermediate messy reality in which market participants can trade multiple assets in a wide array of somewhat segmented markets. As a consequence, the models do not reveal much about the benefits of the massive amount of daily or quarterly re-allocations of wealth within financial markets. The models also say nothing about the relevant costs and benefits of resulting fluctuations in financial structure (across bank loans, corporate debt, and equity).
N. Gregory Mankiw, regarded as one of the founders of New Keynesian DSGE modeling, has argued that
New classical and New Keynesian research has had little impact on practical macroeconomists who are charged with [...] policy. [...] From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.
In the 2010 United States Congress hearings on macroeconomic modeling methods, held on 20 July 2010, and aiming to investigate why macroeconomists failed to foresee the financial crisis of 2007-2010, MIT professor of Economics Robert Solow criticized the DSGE models currently in use:
I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way... The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.
Commenting on the Congressional session, The Economist asked whether agent-based models might better predict financial crises than DSGE models.
Former Chief Economist and Senior Vice President of the World Bank Paul Romer has criticized the "mathiness" of DSGE models and dismisses the inclusion of "imaginary shocks" in DSGE models that ignore "actions that people take." Romer submits a simplified presentation of real business cycle (RBC) modelling, which, as he states, essentially involves two mathematical expressions: the well-known formula of the quantity theory of money, and an identity that defines the growth accounting residual as the difference between growth of output and growth of an index of inputs in production.
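In standard textbook notation (the symbols are the conventional ones, not quoted from Romer), the two expressions are the quantity-theory identity and the growth-accounting (Solow) residual:

M V = P Y
\Delta \ln A_t = \Delta \ln Y_t - \alpha\,\Delta \ln K_t - (1-\alpha)\,\Delta \ln L_t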
Romer assigned the label "phlogiston" to this residual, while criticizing the lack of consideration given to monetary policy in DSGE analysis.
Joseph Stiglitz finds "staggering" shortcomings in the "fantasy world" the models create and argues that "the failure [of macroeconomics] were the wrong microfoundations, which failed to incorporate key aspects of economic behavior". He suggested the models have failed to incorporate "insights from information economics and behavioral economics" and are "ill-suited for predicting or responding to a financial crisis." Oxford University's John Muellbauer put it this way: "It is as if the information economics revolution, for which George Akerlof, Michael Spence and Joe Stiglitz shared the Nobel Prize in 2001, had not occurred. The combination of assumptions, when coupled with the trivialisation of risk and uncertainty...render money, credit and asset prices largely irrelevant... [The models] typically ignore inconvenient truths." Nobel laureate Paul Krugman asked, "Were there any interesting predictions from DSGE models that were validated by events? If there were, I'm not aware of it."
Austrian economists reject DSGE modelling. Critique of DSGE-style macromodeling is at the core of Austrian theory, where, as opposed to RBC and New Keynesian models in which capital is homogeneous, capital is heterogeneous and multi-specific, and, therefore, production functions for the multi-specific capital are simply discovered over time. Lawrence H. White concludes that present-day mainstream macroeconomics is dominated by Walrasian DSGE models, with restrictions added to generate Keynesian properties:
Mises consistently attributed the boom-initiating shock to unexpectedly expansive policy by a central bank trying to lower the market interest rate. Hayek added two alternate scenarios. [One is where] fresh producer-optimism about investment raises the demand for loanable funds, and thus raises the natural rate of interest, but the central bank deliberately prevents the market rate from rising by expanding credit. [Another is where,] in response to the same kind of increase the demand for loanable funds, but without central bank impetus, the commercial banking system by itself expands credit more than is sustainable.
Hayek had criticized Wicksell for the confusion of thinking that establishing a rate of interest consistent with intertemporal equilibrium also implies a constant price level. Hayek posited that intertemporal equilibrium requires not a natural rate but the "neutrality of money," in the sense that money does not "distort" (influence) relative prices.
Post-Keynesians reject the notions of macro-modelling typified by DSGE. They consider such attempts as "a chimera of authority," pointing to the 2003 statement by Lucas, the pioneer of modern DSGE modelling:
Macroeconomics in [its] original sense [of preventing the recurrence of economic disasters] has succeeded. Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades.
A basic Post Keynesian presumption, which Modern Monetary Theory proponents share, and which is central to Keynesian analysis, is that the future is unknowable and so, at best, we can make guesses about it that would be based broadly on habit, custom, gut-feeling, etc. In DSGE modeling, the central equation for consumption supposedly provides a way in which the consumer links decisions to consume now with decisions to consume later and thus achieves maximum utility in each period. Our marginal utility from consumption today must equal our marginal utility from consumption in the future, with a weighting parameter that refers to the valuation that we place on the future relative to today. And since the consumer is supposed to always satisfy this equation for consumption, this means that all of us do it individually, if this approach is to reflect the DSGE microfoundational notions of consumption. However, post-Keynesians state that: no consumer is the same as another in terms of random shocks and uncertainty of income (since some consumers will spend every cent of any extra income they receive while others, typically higher-income earners, spend comparatively little of any extra income); no consumer is the same as another in terms of access to credit; not every consumer really considers what they will be doing at the end of their life in any coherent way, so there is no concept of a "permanent lifetime income", which is central to DSGE models; and, therefore, trying to "aggregate" all these differences into one, single "representative agent" is impossible. These assumptions are similar to the assumptions made in the so-called Ricardian equivalence, whereby consumers are assumed to be forward looking and to internalize the government's budget constraints when making consumption decisions, and therefore taking decisions on the basis of practically perfect evaluations of available information.
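The consumption condition being paraphrased here is the standard Euler equation, written below in its common textbook form as an illustration, with beta the weighting parameter on the future and r the real interest rate:

u'(c_t) = \beta\,(1 + r_{t+1})\,\mathbb{E}_t\!\left[u'(c_{t+1})\right]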
Extrinsic unpredictability, post-Keynesians state, has "dramatic consequences" for the standard, macroeconomic, forecasting, DSGE models used by governments and other institutions around the world. The mathematical basis of every DSGE model fails when distributions shift, since general-equilibrium theories rely heavily on ceteris paribus assumptions. They point to the Bank of England's explicit admission that none of the models they used and evaluated coped well during the 2007–2008 financial crisis, which, for the Bank, "underscores the role that large structural breaks can have in contributing to forecast failure, even if they turn out to be temporary."
Christian Mueller points out that the fact that DSGE models evolve (see next section) constitutes a contradiction of the modelling approach in its own right and, ultimately, makes DSGE models subject to the Lucas critique. This contradiction arises because the economic agents in the DSGE models fail to account for the fact that the very models on the basis of which they form expectations evolve due to progress in economic research. While the evolution of DSGE models as such is predictable, the direction of this evolution is not. In effect, Lucas' notion of the systematic instability of economic models carries over to DSGE models, proving that they are not solving one of the key problems they are thought to be overcoming.
Evolution of viewpoints
Federal Reserve Bank of Minneapolis president Narayana Kocherlakota acknowledges that DSGE models were "not very useful" for analyzing the financial crisis of 2007-2010 but argues that the applicability of these models is "improving," and claims that there is growing consensus among macroeconomists that DSGE models need to incorporate both "price stickiness and financial market frictions." Despite his criticism of DSGE modelling, he states that modern models are useful:
In the early 2000s, ...[the] problem of fit disappeared for modern macro models with sticky prices. Using novel Bayesian estimation methods, Frank Smets and Raf Wouters demonstrated that a sufficiently rich New Keynesian model could fit European data well. Their finding, along with similar work by other economists, has led to widespread adoption of New Keynesian models for policy analysis and forecasting by central banks around the world.
Still, Kocherlakota observes that in "terms of fiscal policy (especially short-term fiscal policy), modern macro-modeling seems to have had little impact. ... [M]ost, if not all, of the motivation for the fiscal stimulus was based largely on the long-discarded models of the 1960s and 1970s."
In 2010, Rochelle M. Edge, of the Federal Reserve Board of Governors, contended that the work of Smets & Wouters has "led DSGE models to be taken more seriously by central bankers around the world" so that "DSGE models are now quite prominent tools for macroeconomic analysis at many policy institutions, with forecasting being one of the key areas where these models are used, in conjunction with other forecasting methods."
University of Minnesota professor of economics V.V. Chari has pointed out that state-of-the-art DSGE models are more sophisticated than their critics suppose:
The models have all kinds of heterogeneity in behavior and decisions... people's objectives differ, they differ by age, by information, by the history of their past experiences.
Chari also argued that current DSGE models frequently incorporate frictional unemployment, financial market imperfections, and sticky prices and wages, and therefore imply that the macroeconomy behaves in a suboptimal way which monetary and fiscal policy may be able to improve. Columbia University's Michael Woodford concedes that policies considered by DSGE models might not be Pareto optimal and may not satisfy some other social welfare criterion either. Nonetheless, in replying to Mankiw, Woodford argues that the DSGE models commonly used by central banks today, which strongly influence policy makers like Ben Bernanke, do not provide an analysis so different from traditional Keynesian analysis:
It is true that the modeling efforts of many policy institutions can reasonably be seen as an evolutionary development within the macroeconomic modeling program of the postwar Keynesians; thus if one expected, with the early New Classicals, that adoption of the new tools would require building anew from the ground up, one might conclude that the new tools have not been put to use. But in fact they have been put to use, only not with such radical consequences as had once been expected.
See also
Footnotes
References
Sources
Further reading
Software
DYNARE, free software for handling economic models, including DSGE
IRIS, free, open-source toolbox for macroeconomic modeling and forecasting
External links
Society for Economic Dynamics - Website of the Society for Economic Dynamics, dedicated to advances in DSGE modeling.
DSGE-NET, an "international network for DSGE modeling, monetary and fiscal policy"
General equilibrium theory
New classical macroeconomics
New Keynesian economics
Jöns Jacob Berzelius
Baron Jöns Jacob Berzelius (20 August 1779 – 7 August 1848) was a Swedish chemist. In general, he is considered the last person to know the whole field of chemistry. Berzelius is considered, along with Robert Boyle, John Dalton, and Antoine Lavoisier, to be one of the founders of modern chemistry. Berzelius became a member of the Royal Swedish Academy of Sciences in 1808 and served from 1818 as its principal functionary. He is known in Sweden as the "Father of Swedish Chemistry". During his lifetime he did not customarily use his first given name, and was universally known simply as Jacob Berzelius.
Although Berzelius began his career as a physician, his enduring contributions were in the fields of electrochemistry, chemical bonding and stoichiometry. In particular, he is noted for his determination of atomic weights and for his experiments that led to a more complete understanding of the principles of stoichiometry, the branch of chemistry pertaining to the quantitative relationships between elements in chemical compounds and chemical reactions, and to the fact that these combinations occur in definite proportions. This understanding came to be known as the "Law of Constant Proportions".
Berzelius was a strict empiricist, expecting that any new theory must be consistent with the sum of contemporary chemical knowledge. He developed improved methods of chemical analysis, which were required to develop the basic data in support of his work on stoichiometry. He investigated isomerism, allotropy, and catalysis, phenomena that owe their names to him. Berzelius was among the first to articulate the differences between inorganic compounds and organic compounds. Among the many minerals and elements he studied, he is credited with discovering cerium and selenium, and with being the first to isolate silicon and thorium. Following on his interest in mineralogy, Berzelius synthesized and chemically characterized new compounds of these and other elements.
Berzelius demonstrated the use of an electrochemical cell to decompose certain chemical compounds into pairs of electrically opposite constituents. From this research, he articulated a theory that came to be known as electrochemical dualism, contending that chemical compounds are oxide salts, bonded together by electrostatic interactions. This theory, while useful in some contexts, came to be seen as insufficient. Berzelius's work with atomic weights and his theory of electrochemical dualism led to his development of a modern system of chemical formula notation that showed the composition of any compound both qualitatively and quantitatively. His system abbreviated the Latin names of the elements with one or two letters and applied superscripts to designate the number of atoms of each element present in the compound. Later, chemists changed to use of subscripts rather than superscripts.
Biography
Early life and education
Berzelius was born in the parish of Väversunda in Östergötland in Sweden. His father Samuel Berzelius was a school teacher in the nearby city of Linköping, and his mother Elizabeth Dorothea Sjösteen was a homemaker. His parents were both from families of church pastors. Berzelius lost both his parents at an early age. His father died in 1779, after which his mother married a pastor named Anders Eckmarck, who gave Berzelius a basic education including knowledge of the natural world. Following the death of his mother in 1787, relatives in Linköping took care of him. There he attended the school today known as Katedralskolan. As a teenager, he took a position as a tutor at a farm near his home, during which time he became interested in collecting flowers and insects and their classification.
Berzelius later enrolled as a medical student at Uppsala University, from 1796 to 1801. Anders Gustaf Ekeberg, the discoverer of tantalum, taught him chemistry during this time. He worked as an apprentice in a pharmacy, during which time he also learned practical matters in the laboratory such as glassblowing. On his own during his studies, he successfully repeated the experimentation conducted by Swedish chemist Carl Wilhelm Scheele which led to Scheele's discovery of oxygen. He also worked with a physician at the Medevi mineral springs, during which time he conducted an analysis of the water from this source. Additionally, as part of his studies, in 1800 Berzelius learned about Alessandro Volta's electric pile, the first device that could provide a constant electric current (i.e., the first battery). He constructed a similar battery for himself, consisting of alternating disks of copper and zinc, and this was his initial work in the field of electrochemistry.
As thesis research in his medical studies, he examined the influence of galvanic current on several diseases. This line of experimentation produced no clear-cut evidence for such influence. Berzelius graduated as a medical doctor in 1802. He worked as a physician near Stockholm until the chemist and mine-owner Wilhelm Hisinger recognized his abilities as an analytical chemist and provided him with a laboratory.
Academic career
In 1807, Berzelius was appointed professor in chemistry and pharmacy at the Karolinska Institute. Between 1808 and 1836, Berzelius worked together with Anna Sundström, who acted as his assistant and was the first female chemist in Sweden.
In 1808, he was elected a member of the Royal Swedish Academy of Sciences. At this time, the Academy had been stagnating for several years, since the era of romanticism in Sweden had led to less interest in the sciences. In 1818, Berzelius was elected the Academy's secretary and held the post until 1848. During Berzelius' tenure, he is credited with revitalising the Academy and bringing it into a second golden era (the first being the astronomer Pehr Wilhelm Wargentin's period as secretary from 1749 to 1783). He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1822. In 1827, he became correspondent of the Royal Institute of the Netherlands, and in 1830 associate member.
In 1837, he was elected a member of the Swedish Academy, on chair number 5.
Temperance movement
Berzelius was active in the temperance movement. Along with Anders Retzius, Samuel Owen, George Scott, and others, he was one of the founders of the Svenska nykterhetssällskapet (the Swedish Temperance Society) in 1837 and served as its first chairman. Berzelius wrote the foreword to one work on the topic, of which 50,000 copies were printed.
Later life
Through much of his life, Berzelius suffered from various medical ailments, including recurrent migraine headaches and, later in life, gout. He also had episodes of depression.
In 1818, Berzelius had a nervous breakdown, said to be due to the stress of his work. The medical advice he received was to travel and take a vacation. Instead, during this time Berzelius traveled to France to work in the chemical laboratories of Claude Louis Berthollet.
In 1835, at the age of 56, he married Elizabeth Poppius, the 24-year-old daughter of a Swedish cabinet minister.
He died on 7 August 1848 at his home in Stockholm, where he had lived since 1806. He is buried in the Solna Cemetery.
Achievements
Law of definite proportions
Soon after arriving in Stockholm, Berzelius wrote a chemistry textbook for his medical students, Lärbok i Kemien, which was his first significant scientific publication. He had conducted experimentation, in preparation for writing this textbook, on the compositions of inorganic compounds, which was his earliest work on definite proportions. In 1813–4, he submitted a lengthy essay (published in five separate articles) on the proportions of elements in compounds. The essay commenced with a general description, introduced his new symbolism, and examined all the known elements. The essay ended with a table of the "specific weights" (relative atomic masses) of the elements, where oxygen was set to 100, and a selection of compounds written in his new formalism. This work provided evidence in favour of the atomic theory proposed by John Dalton: that inorganic chemical compounds are composed of atoms of different elements combined in whole number amounts. In discovering that atomic weights are not integer multiples of the atomic weight of hydrogen, Berzelius also disproved Prout's hypothesis that elements are built up from atoms of hydrogen. Berzelius's last revised version of his atomic weight tables was first published in a German translation of his Textbook of Chemistry in 1826.
Chemical notation
In order to aid his experiments, he developed a system of chemical notation in which the elements composing any particular chemical compound were given simple written labels—such as O for oxygen, or Fe for iron—with their proportions in the chemical compound denoted by numbers. Berzelius thus invented the system of chemical notation still used today, the main difference being that instead of the subscript numbers used today (e.g., H₂O or Fe₂O₃), Berzelius used superscripts (H²O or Fe²O³).
Discovery of elements
Berzelius is credited with discovering the chemical elements cerium and selenium and with being the first to isolate silicon, thorium, titanium and zirconium. He discovered cerium in 1803 and selenium in 1817, and he first isolated both silicon and thorium in 1824.
Students working in Berzelius's laboratory also discovered lithium, lanthanum, and vanadium.
Berzelius discovered amorphous silicon by repeating an experiment performed by Gay-Lussac and Thénard, in which they had reacted silicon tetrafluoride with potassium metal to produce very impure silicon. In a variation of this experiment, Berzelius heated potassium fluorosilicate with potassium, producing potassium silicide, which he then stirred with water to obtain relatively pure silicon powder. Berzelius recognized this powder as the new element silicon, which he called silicium, a name proposed earlier by Davy.
Berzelius was the first to isolate zirconium in 1824, but pure zirconium was not produced until 1925, by Anton Eduard van Arkel and Jan Hendrik de Boer.
New chemical terms
Berzelius is credited with originating the chemical terms "catalysis", "polymer," "isomer," "protein" and "allotrope," although his original definitions in some cases differ significantly from modern usage. As an example, he coined the term "polymer" in 1833 to describe organic compounds which shared identical empirical formulas but which differed in overall molecular weight, the larger of the compounds being described as "polymers" of the smallest. At this time the concept of chemical structure had not yet been developed so that he considered only the numbers of atoms of each element. In this way, he viewed for example glucose (C6H12O6) as a polymer of formaldehyde (CH2O), even though we now know that glucose is not a polymer of the monomer formaldehyde.
Biology and organic chemistry
Berzelius was the first person to make the distinction between organic compounds (those containing carbon), and inorganic compounds. In particular, he advised Gerardus Johannes Mulder in his elemental analyses of organic compounds such as coffee, tea, and various proteins. The term protein itself was coined by Berzelius, in 1838, after Mulder observed that all proteins seemed to have the same empirical formula and came to the erroneous conclusion that they might be composed of a single type of very large molecule. The term is derived from the Greek, meaning "of the first rank", and Berzelius proposed the name because proteins were so fundamental to living organisms.
In 1808, Berzelius discovered that lactic acid occurs in muscle tissue, not just in milk.
The term biliverdin was coined by Berzelius in 1840, although he preferred "bilifulvin" (yellow/red) over "bilirubin" (red).
Vitalism
Berzelius stated in 1810 that living things work by some mysterious "vital force", a hypothesis called vitalism. Vitalism had been proposed by earlier researchers, but Berzelius contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). However, in 1828, Friedrich Wöhler accidentally obtained urea, an organic compound, by heating ammonium cyanate. This showed that an organic compound such as urea could be prepared synthetically and not exclusively by living organisms. Berzelius corresponded with Wöhler on the urea synthesis findings. Nevertheless, the notion of vitalism persisted until further work on the abiotic synthesis of organic compounds provided substantial evidence against it.
Works
Lärbok i kemien (in Swedish). Stockholm, Nordström, 1808-1830.
Tabell, som utvisar vigten af större delen vid den oorganiska Kemiens studium märkvärdiga enkla och sammansatta kroppars atomer, jemte deras sammansättning, räknad i procent (in Swedish). Stockholm : H.A. Nordström, 1818.
Relations with other scientists
Berzelius was a prolific correspondent with leading scientists of his time, such as Gerardus Johannes Mulder, Claude Louis Berthollet, Humphry Davy, Friedrich Wöhler, Eilhard Mitscherlich and Christian Friedrich Schönbein.
In 1812, Berzelius traveled to London, England, including a visit to Greenwich, to meet with prominent British scientists of the time. These included Humphry Davy, chemist William Wollaston, physician-scientist Thomas Young, astronomer William Herschel, chemist Smithson Tennant, and inventor James Watt, among others. Berzelius also visited Davy's laboratory, after which he remarked, "A tidy laboratory is a sign of a lazy chemist."
Humphry Davy in 1810 proposed that chlorine is an element. Berzelius rejected this claim because of his belief that all acids were based on oxygen. Since chlorine forms a strong acid (muriatic acid, modern HCl), chlorine must contain oxygen and thus cannot be an element. However, in 1812, Bernard Courtois proved that iodine is an element. Then in 1816 Joseph-Louis Gay-Lussac demonstrated that prussic acid (hydrogen cyanide) contains only hydrogen, carbon, and nitrogen, and no oxygen. These findings persuaded Berzelius that not all acids contain oxygen, and that Davy and Gay-Lussac were correct: chlorine and iodine are indeed elements.
Honors and recognition
In 1818 Berzelius was ennobled by King Carl XIV Johan. In 1835, he received the title of friherre.
In 1820 he was elected a member of the American Philosophical Society.
The Royal Society of London gave Berzelius the Copley Medal in 1836 with the citation "For his systematic application of the doctrine of definite proportions to the analysis of mineral bodies, as contained in his Nouveau Systeme de Mineralogie, and in other of his works."
In 1840, Berzelius was named Knight of the Order of Leopold. In 1842, he received the honor Pour le Mérite for Sciences and Arts.
The mineral berzelianite, a copper selenide, was discovered in 1850 and named after him by James Dwight Dana.
In 1852, a public park and a statue were created in Stockholm to honor Berzelius. Berzeliusskolan, a school situated next to his alma mater, Katedralskolan, is named for him. In 1890, a fairly prominent street in Gothenburg was named Berzeliigatan (Berzelii Street) in his honor.
In 1898, the Swedish Academy of Sciences opened the Berzelius Museum in his honor. The holdings of the museum included many items from his laboratory. The museum was opened on the occasion of the fiftieth anniversary of Berzelius's death. Invitees at the ceremony marking the occasion included scientific dignitaries from eleven European nations and the United States, many of whom gave formal addresses in honor of Berzelius. The Berzelius Museum was later moved to the observatory that is part of the Swedish Academy of Sciences.
In 1939 his portrait appeared on a series of postage stamps commemorating the bicentenary of the founding of the Swedish Academy of Sciences. Grenada has likewise honored him on a postage stamp.
The Berzelius secret society at Yale University is named in his honor.
See also
Berzelius beaker
References
Further reading
Holmberg, Arne (1933) Bibliografi över J. J. Berzelius. 2 parts in 5 vol. Stockholm: Kungl. Svenska Vetenskapsakademien, 1933–67. 1. del och suppl. 1–2. Tryckta arbeten av och om Berzelius. 2. del och suppl. Manuskript
Jorpes, J. Erik (1966) Jac. Berzelius – his life and work; translated from the Swedish manuscript by Barbara Steele. Stockholm: Almqvist & Wiksell, 1966. (Reissued by University of California Press, Berkeley, 1970 )
Partington, J. R. (1964) History of Chemistry; vol. 4. London: Macmillan; pp. 142–77
External links
List of works by Berzelius (301 items as of access date 2011-12-29)
Online works at Project Runeberg
Online correspondence between Berzelius and Sir Humphry Davy on Wikisource
Online works on Gallica (27 items as of access date 2011-12-29)
Nordisk familjebok (1905), band 3, s. 90–96
Digital edition of "Lehrbuch der Chemie" 1823/1824 by the University and State Library Düsseldorf
Digital edition of "Das saidschitzer Bitterwasser : chemisch untersucht" 1840 by the University and State Library Düsseldorf
Digital edition of "Aus Jac. Berzelius' und Gustav Magnus' Briefwechsel in den Jahren 1828–1847" 1900 by the University and State Library Düsseldorf
Protein engineering
Protein engineering is the process of developing useful or valuable proteins through the design and production of unnatural polypeptides, often by altering amino acid sequences found in nature. It is a young discipline, with much research devoted to understanding protein folding and protein recognition in order to establish protein design principles. It has been used to improve the function of many enzymes for industrial catalysis. It is also a product and services market, with an estimated value of $168 billion by 2017.
There are two general strategies for protein engineering: rational protein design and directed evolution. These methods are not mutually exclusive; researchers will often apply both. In the future, more detailed knowledge of protein structure and function, and advances in high-throughput screening, may greatly expand the abilities of protein engineering. Eventually, even unnatural amino acids may be included, via newer methods, such as expanded genetic code, that allow encoding novel amino acids in genetic code.
The applications of protein engineering are vast, spanning fields such as medicine and industrial bioprocessing.
Approaches
Rational design
In rational protein design, a scientist uses detailed knowledge of the structure and function of a protein to make desired changes. In general, this has the advantage of being inexpensive and technically easy, since site-directed mutagenesis methods are well-developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and, even when available, it can be very difficult to predict the effects of various mutations, since structural information most often provides only a static picture of a protein structure. Nonetheless, programs such as Folding@home and Foldit have utilized crowdsourcing techniques in order to gain insight into the folding motifs of proteins.
Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.
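As a rough illustration of the idea (not any particular design package), the Python sketch below scores candidate sequences against a target structure with a placeholder energy function and improves them by simple stochastic search; the energy function, the "reference" field of the hypothetical target_structure object, and all parameter values are assumptions made for this example only.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def energy(sequence, target_structure):
    # Placeholder energy: lower is better. A real design energy would combine
    # packing, electrostatics, solvation and other physical terms; here we
    # simply count differences from a reference sequence stored with the
    # hypothetical target_structure dictionary.
    return sum(1.0 for a, b in zip(sequence, target_structure["reference"]) if a != b)

def design_sequence(target_structure, n_steps=10000, seed=0):
    # Stochastic search in sequence space for a low-energy sequence.
    rng = random.Random(seed)
    length = len(target_structure["reference"])
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best, best_e = list(seq), energy(seq, target_structure)
    for _ in range(n_steps):
        pos = rng.randrange(length)
        old = seq[pos]
        seq[pos] = rng.choice(AMINO_ACIDS)      # propose a point substitution
        e = energy(seq, target_structure)
        if e <= best_e:                         # keep moves that do not worsen the energy
            best, best_e = list(seq), e
        else:
            seq[pos] = old                      # revert worse moves
    return "".join(best), best_e

print(design_sequence({"reference": "MKTAYIAKQR"}))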
Multiple sequence alignment
Without structural information about a protein, sequence analysis is often useful in elucidating information about the protein. These techniques involve alignment of target protein sequences with other related protein sequences. This alignment can show which amino acids are conserved between species and are important for the function of the protein. These analyses can help to identify hot spot amino acids that can serve as the target sites for mutations. Multiple sequence alignment utilizes databases such as PREFAB, SABMARK, OXBENCH, IRMBASE, and BALIBASE in order to cross-reference target protein sequences with known sequences. Multiple sequence alignment techniques are listed below.
Clustal W
This method begins by performing pairwise alignment of sequences using k-tuple or Needleman–Wunsch methods. These methods calculate a matrix that depicts the pairwise similarity among the sequence pairs. Similarity scores are then transformed into distance scores that are used to produce a guide tree using the neighbor joining method. This guide tree is then employed to yield a multiple sequence alignment.
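For illustration, a minimal Needleman–Wunsch global alignment score can be computed by dynamic programming as sketched below; the simple match/mismatch/gap scores are assumptions for the example (real tools use substitution matrices such as BLOSUM), and the resulting pairwise scores would then be converted into distances for guide tree construction.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    # Global pairwise alignment score by dynamic programming.
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Pairwise scores like this one are turned into distances and clustered
# (e.g. by neighbor joining) to build the guide tree.
print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))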
Clustal Omega
This method is capable of aligning up to 190,000 sequences by utilizing the k-tuple method. Next, sequences are clustered using the mBed and k-means methods. A guide tree is then constructed using the UPGMA method, and the final multiple sequence alignment is generated with the HHalign package, guided by this tree.
MAFFT
This method utilizes a fast Fourier transform (FFT) approach in which each amino acid sequence is converted into a numerical sequence of volume and polarity values for each residue. These numerical sequences are then used to rapidly find homologous regions.
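A toy version of this idea is sketched below: residues are mapped to numeric property values and an FFT-based cross-correlation highlights offsets at which two sequences share similar regions. The property values in the PROPS table are illustrative placeholders, not MAFFT's actual tables.

import numpy as np

# Illustrative volume/polarity values for a few residues (placeholders only).
PROPS = {"A": (88.6, 0.0), "G": (60.1, 0.0), "K": (168.6, 1.0),
         "D": (111.1, 1.0), "L": (166.7, 0.0), "S": (89.0, 0.5)}

def encode(seq, index):
    # Convert an amino acid sequence into a numeric series (0 = volume, 1 = polarity).
    return np.array([PROPS.get(aa, (100.0, 0.5))[index] for aa in seq], dtype=float)

def fft_correlation(seq1, seq2, index=0):
    # Cross-correlate the encoded sequences via FFT; peaks suggest relative
    # offsets at which the two sequences contain similar (homologous) regions.
    x, y = encode(seq1, index), encode(seq2, index)
    x = x - x.mean()
    y = y - y.mean()
    n = len(x) + len(y) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.conj(np.fft.rfft(y, n)), n)

print(fft_correlation("GLKDSA", "AKDSGL").round(1))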
K-Align
This method utilizes the Wu-Manber approximate string matching algorithm to generate multiple sequence alignments.
Multiple sequence comparison by log expectation (MUSCLE)
This method utilizes k-mer distances and Kimura distances to generate multiple sequence alignments.
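The two distance measures can be illustrated with the short sketch below: an alignment-free k-mer distance for a fast first pass, and Kimura's correction, which converts an observed fraction of differing aligned residues into an estimated evolutionary distance. The exact formulas used by MUSCLE differ in detail; these are simplified illustrations.

import math

def kmer_distance(a, b, k=3):
    # Fraction of k-mers not shared between two sequences (alignment-free).
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return 1.0 - len(ka & kb) / max(1, min(len(ka), len(kb)))

def kimura_distance(p):
    # Kimura's correction of an observed fraction of differing residues p.
    return -math.log(1.0 - p - 0.2 * p * p)

print(round(kmer_distance("MKTAYIAKQR", "MKTAYLAKQR"), 3))
print(round(kimura_distance(0.30), 3))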
T-Coffee
This method utilizes tree-based consistency objective functions for alignment evaluation. It has been shown to be 5–10% more accurate than Clustal W.
Coevolutionary analysis
Coevolutionary analysis is also known as correlated mutation, covariation, or co-substitution analysis. This type of rational design involves reciprocal evolutionary changes at evolutionarily interacting loci. Generally, this method begins with the generation of a curated multiple sequence alignment for the target sequence. This alignment is then subjected to manual refinement, which involves removal of highly gapped sequences as well as sequences with low sequence identity. This step increases the quality of the alignment. Next, the manually processed alignment is used for further coevolutionary measurements using distinct correlated mutation algorithms. These algorithms result in a coevolution scoring matrix. This matrix is filtered by applying various significance tests to extract significant coevolution values and remove background noise. The coevolutionary measurements are further evaluated to assess their performance and stringency. Finally, the results from this coevolutionary analysis are validated experimentally.
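One of the simplest correlated-mutation scores is the mutual information between two alignment columns, sketched below on a toy alignment; real coevolution pipelines add corrections for phylogeny and background noise, which this example omits.

from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(col_i, col_j):
    # Mutual information between two alignment columns; higher values suggest
    # that the two positions may covary (a raw coevolution signal).
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), count in pij.items():
        p_ab = count / n
        mi += p_ab * log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy alignment: rows are homologous sequences, columns are positions.
alignment = ["AKLE", "AKLE", "GRIE", "GRIE", "AKLD"]
columns = list(zip(*alignment))
scores = {(i, j): mutual_information(columns[i], columns[j])
          for i, j in combinations(range(len(columns)), 2)}
print(max(scores, key=scores.get))   # the most strongly covarying column pair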
Structural prediction
De novo generation of proteins benefits from knowledge of existing protein structures, which assists with the prediction of new protein structures. Methods for protein structure prediction fall under one of the four following classes: ab initio, fragment-based methods, homology modeling, and protein threading.
Ab initio
These methods involve free modeling without using any structural information about the template. Ab initio methods aim to predict the native structures of proteins, corresponding to the global minimum of their free energy. Some examples of ab initio methods are AMBER, GROMOS, GROMACS, CHARMM, OPLS, and ENCEPP12. General steps for ab initio methods begin with the geometric representation of the protein of interest. Next, a potential energy function model for the protein is developed. This model can be created using either molecular mechanics potentials or protein structure derived potential functions. Following the development of a potential model, energy search techniques including molecular dynamics simulations, Monte Carlo simulations and genetic algorithms are applied to the protein.
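Of the search techniques mentioned, the Metropolis Monte Carlo criterion is the easiest to show compactly: lower-energy moves are always accepted, and higher-energy moves are accepted with Boltzmann probability, which lets the search escape local minima. The sketch below is generic; the energy function, move proposal and temperature are all placeholders supplied by the caller.

import math
import random

def metropolis_search(energy_fn, initial_state, propose_move, n_steps=10000,
                      temperature=1.0, seed=0):
    # Generic Metropolis Monte Carlo minimization.
    rng = random.Random(seed)
    state, e = initial_state, energy_fn(initial_state)
    best_state, best_e = state, e
    for _ in range(n_steps):
        candidate = propose_move(state, rng)        # e.g. perturb a backbone torsion
        e_new = energy_fn(candidate)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            state, e = candidate, e_new             # accept the move
            if e < best_e:
                best_state, best_e = state, e
    return best_state, best_e

# Toy usage on a one-dimensional "energy landscape".
best_x, best_e = metropolis_search(
    energy_fn=lambda x: (x - 2.0) ** 2,
    initial_state=10.0,
    propose_move=lambda x, rng: x + rng.uniform(-0.5, 0.5))
print(round(best_x, 2), round(best_e, 4))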
Fragment based
These methods use database information regarding structures to match homologous structures to the created protein sequences. These homologous structures are assembled to give compact structures using scoring and optimization procedures, with the goal of achieving the lowest potential energy score. Webservers for fragment information are I-TASSER, ROSETTA, Rosetta@home, FRAGFOLD, CABS fold, PROFESY, CREF, QUARK, UNDERTAKER, HMM, and ANGLOR.
Homology modeling
These methods are based upon the homology of proteins. These methods are also known as comparative modeling. The first step in homology modeling is generally the identification of template sequences of known structure which are homologous to the query sequence. Next the query sequence is aligned to the template sequence. Following the alignment, the structurally conserved regions are modeled using the template structure. This is followed by the modeling of side chains and loops that are distinct from the template. Finally the modeled structure undergoes refinement and assessment of quality. Servers that are available for homology modeling data are listed here: SWISS MODEL, MODELLER, ReformAlign, PyMOD, TIP-STRUCTFAST, COMPASS, 3d-PSSM, SAMT02, SAMT99, HHPRED, FAGUE, 3D-JIGSAW, META-PP, ROSETTA, and I-TASSER.
Protein threading
Protein threading can be used when a reliable homologue for the query sequence cannot be found. This method begins by obtaining a query sequence and a library of template structures. Next, the query sequence is threaded over known template structures. These candidate models are scored using scoring functions. These are scored based upon potential energy models of both query and template sequence. The match with the lowest potential energy model is then selected. Methods and servers for retrieving threading data and performing calculations are listed here: GenTHREADER, pGenTHREADER, pDomTHREADER, ORFEUS, PROSPECT, BioShell-Threading, FFASO3, RaptorX, HHPred, LOOPP server, Sparks-X, SEGMER, THREADER2, ESYPRED3D, LIBRA, TOPITS, RAPTOR, COTH, MUSTER.
For more information on rational design see site-directed mutagenesis.
Multivalent binding
Multivalent binding can be used to increase binding specificity and affinity through avidity effects. Having multiple binding domains in a single biomolecule or complex increases the likelihood that other interactions will occur via individual binding events. Avidity, or effective affinity, can be much higher than the sum of the individual affinities, providing a cost- and time-effective tool for targeted binding.
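A deliberately simplified calculation below illustrates why valency helps: for independent binding sites, the probability that a multivalent molecule stays attached (at least one site bound) rises quickly with the number of sites. This toy model ignores the rebinding and local-concentration effects that make real avidity even stronger, and the concentration and dissociation constant values are arbitrary assumptions.

def single_site_occupancy(ligand_conc, kd):
    # Equilibrium fraction of time a single site is occupied.
    return ligand_conc / (ligand_conc + kd)

def fraction_attached(ligand_conc, kd, valency):
    # Probability that at least one of `valency` independent sites is bound.
    f = single_site_occupancy(ligand_conc, kd)
    return 1.0 - (1.0 - f) ** valency

for n in (1, 2, 4):
    print(n, round(fraction_attached(ligand_conc=0.1, kd=1.0, valency=n), 3))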
Multivalent proteins
Multivalent proteins are relatively easy to produce by post-translational modifications or by multiplying the protein-coding DNA sequence. The main advantage of multivalent and multispecific proteins is that they can increase the effective affinity for a target of a known protein. In the case of an inhomogeneous target, using a combination of proteins that results in multispecific binding can increase specificity, which has high applicability in protein therapeutics.
The most common example of multivalent binding is the antibody, and there is extensive research on bispecific antibodies. Applications of bispecific antibodies cover a broad spectrum that includes diagnosis, imaging, prophylaxis, and therapy.
Directed evolution
In directed evolution, random mutagenesis, e.g. by error-prone PCR or sequence saturation mutagenesis, is applied to a protein, and a selection regime is used to select variants having desired traits. Further rounds of mutation and selection are then applied. This method mimics natural evolution and, in general, produces superior results to rational design. An added process, termed DNA shuffling, mixes and matches pieces of successful variants to produce better results. Such processes mimic the recombination that occurs naturally during sexual reproduction. Advantages of directed evolution are that it requires no prior structural knowledge of a protein, nor is it necessary to be able to predict what effect a given mutation will have. Indeed, the results of directed evolution experiments are often surprising in that desired changes are often caused by mutations that were not expected to have such an effect. The drawback is that these methods require high-throughput screening, which is not feasible for all proteins. Large amounts of recombinant DNA must be mutated and the products screened for desired traits. The large number of variants often requires expensive robotic equipment to automate the process. Further, not all desired activities can be screened for easily.
Natural Darwinian evolution can be effectively imitated in the lab toward tailoring protein properties for diverse applications, including catalysis. Many experimental technologies exist to produce large and diverse protein libraries and for screening or selecting folded, functional variants. Folded proteins arise surprisingly frequently in random sequence space, an occurrence exploitable in evolving selective binders and catalysts. While more conservative than direct selection from deep sequence space, redesign of existing proteins by random mutagenesis and selection/screening is a particularly robust method for optimizing or altering extant properties. It also represents an excellent starting point for achieving more ambitious engineering goals. Allying experimental evolution with modern computational methods is likely the broadest, most fruitful strategy for generating functional macromolecules unknown to nature.
The main challenges of designing high-quality mutant libraries have seen significant progress in the recent past. This progress has been in the form of better descriptions of the effects of mutational loads on protein traits. Computational approaches have also made large advances in narrowing the innumerably large sequence space to more manageable, screenable sizes, thus creating smart libraries of mutants. Library size has also been reduced to more screenable sizes by the identification of key beneficial residues using algorithms for systematic recombination. Finally, a significant step forward toward efficient reengineering of enzymes has been made with the development of more accurate statistical models and algorithms quantifying and predicting coupled mutational effects on protein functions.
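The scale of the problem is easy to see with a short back-of-the-envelope calculation (the 100-residue length and the choice of 5 randomized hot-spot positions below are arbitrary illustrative values):

import math

n_residues = 100
log10_full_space = n_residues * math.log10(20)
print(f"full sequence space ~ 10^{log10_full_space:.0f} sequences")   # ~10^130

# Randomizing only 5 chosen hot-spot positions instead gives a library that
# high-throughput screening can actually cover.
print(20 ** 5, "variants")   # 3,200,000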
Generally, directed evolution may be summarized as an iterative two-step process involving the generation of protein mutant libraries and high-throughput screening to select for variants with improved traits. This technique does not require prior knowledge of the protein structure–function relationship. Directed evolution utilizes random or focused mutagenesis to generate libraries of mutant proteins. Random mutations can be introduced using either error-prone PCR or site saturation mutagenesis. Mutants may also be generated using recombination of multiple homologous genes. Nature has evolved a limited number of beneficial sequences; directed evolution makes it possible to identify undiscovered protein sequences which have novel functions. This ability is contingent on the protein's ability to tolerate amino acid residue substitutions without compromising folding or stability.
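The overall cycle can be written down as a short skeleton, shown below; the mutate and express_and_assay callables are placeholders for the wet-lab steps (for example error-prone PCR and an activity screen), and the library and survivor sizes are arbitrary example values.

import random

def directed_evolution(parent_gene, mutate, express_and_assay, n_rounds=5,
                       library_size=1000, n_survivors=10, seed=0):
    # Iterative diversify-screen-select cycle.
    rng = random.Random(seed)
    parents = [parent_gene]
    for _ in range(n_rounds):
        library = [mutate(rng.choice(parents), rng) for _ in range(library_size)]
        ranked = sorted(library, key=express_and_assay, reverse=True)
        parents = ranked[:n_survivors]          # variants with improved traits
    return parents[0]

# Toy usage: "genes" are DNA strings and fitness simply counts adenines.
best = directed_evolution(
    parent_gene="GATTACAGATTACA",
    mutate=lambda g, rng: "".join(rng.choice("ACGT") if rng.random() < 0.05 else c for c in g),
    express_and_assay=lambda g: g.count("A"))
print(best)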
Directed evolution methods can be broadly categorized into two strategies, asexual and sexual methods.
Asexual methods
Asexual methods do not generate any cross links between parental genes. Single genes are used to create mutant libraries using various mutagenic techniques. These asexual methods can produce either random or focused mutagenesis.
Random mutagenesis
Random mutagenic methods produce mutations at random throughout the gene of interest. Random mutagenesis can introduce the following types of mutations: transitions, transversions, insertions, deletions, inversions, missense mutations, and nonsense mutations. Examples of methods for producing random mutagenesis are below.
Error prone PCR
Error prone PCR utilizes the fact that Taq DNA polymerase lacks 3' to 5' exonuclease activity. This results in an error rate of 0.001–0.002% per nucleotide per replication. This method begins with choosing the gene, or the area within a gene, one wishes to mutate. Next, the extent of error required is calculated based upon the type and extent of activity one wishes to generate. This extent of error determines the error prone PCR strategy to be employed. Following PCR, the genes are cloned into a plasmid and introduced to competent cell systems. These cells are then screened for desired traits. Plasmids are then isolated from colonies which show improved traits and used as templates for the next round of mutagenesis. Error prone PCR shows biases for certain mutations relative to others, such as a bias for transitions over transversions.
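As a rough illustration of what that error rate means in practice, the short calculation below estimates the average number of mutations accumulated per gene copy; the 900-nucleotide gene length and 20 effective doublings are assumed example values.

def expected_mutations(error_rate_per_nt, gene_length_nt, doublings):
    # Approximation: errors accumulate roughly linearly over template doublings.
    return error_rate_per_nt * gene_length_nt * doublings

# Taq's quoted error rate of 0.001-0.002% equals 1-2 x 10^-5 per nucleotide per replication.
for rate in (1e-5, 2e-5):
    print(round(expected_mutations(rate, 900, 20), 2))
# ~0.2-0.4 mutations per gene copy; the mutagenic conditions listed below
# (Mn2+, biased dNTPs, extra cycles) are used to push this higher.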
Rates of error in PCR can be increased in the following ways:
Increase the concentration of magnesium chloride, which stabilizes non-complementary base pairing.
Add manganese chloride to reduce base pair specificity.
Increased and unbalanced addition of dNTPs.
Addition of base analogs like dITP, 8 oxo-dGTP, and dPTP.
Increase concentration of Taq polymerase.
Increase extension time.
Increase cycle time.
Use less accurate Taq polymerase.
Also see polymerase chain reaction for more information.
Rolling circle error-prone PCR
This PCR method is based upon rolling circle amplification, which is modeled on the method that bacteria use to amplify circular DNA. This method results in linear DNA duplexes. These fragments contain tandem repeats of circular DNA called concatemers, which can be transformed into bacterial strains. Mutations are introduced by first cloning the target sequence into an appropriate plasmid. Next, the amplification process begins using random hexamer primers and Φ29 DNA polymerase under error prone rolling circle amplification conditions. Additional conditions to produce error prone rolling circle amplification are 1.5 pM of template DNA, 1.5 mM MnCl2 and a 24 hour reaction time. MnCl2 is added into the reaction mixture to promote random point mutations in the DNA strands. Mutation rates can be increased by increasing the concentration of MnCl2, or by decreasing the concentration of the template DNA. Error prone rolling circle amplification is advantageous relative to error prone PCR because of its use of universal random hexamer primers, rather than specific primers. Also, the reaction products of this amplification do not need to be treated with ligases or endonucleases. This reaction is isothermal.
Chemical mutagenesis
Chemical mutagenesis involves the use of chemical agents to introduce mutations into genetic sequences. Examples of chemical mutagens follow.
Sodium bisulfite is effective at mutating G/C-rich genomic sequences. This is because sodium bisulfite catalyses the deamination of unmethylated cytosine to uracil.
Ethyl methane sulfonate alkylates guanine residues. This alteration causes errors during DNA replication.
Nitrous acid causes transitions by deamination of adenine and cytosine.
The dual approach to random chemical mutagenesis is an iterative two-step process. First, it involves the in vivo chemical mutagenesis of the gene of interest via EMS. Next, the treated gene is isolated and cloned into an untreated expression vector in order to prevent mutations in the plasmid backbone. This technique preserves the plasmid's genetic properties.
Targeting glycosylases to embedded arrays for mutagenesis (TaGTEAM)
This method has been used to create targeted in vivo mutagenesis in yeast. It involves the fusion of a 3-methyladenine DNA glycosylase to a tetR DNA-binding domain, and has been shown to increase mutation rates more than 800-fold in regions of the genome containing tetO sites.
Mutagenesis by random insertion and deletion
This method involves alteration of the sequence length via simultaneous deletion and insertion of chunks of bases of arbitrary length. It has been shown to produce proteins with new functionalities via the introduction of new restriction sites, specific codons, and four-base codons for non-natural amino acids.
Transposon based random mutagenesis
Recently, many methods for transposon based random mutagenesis have been reported. These methods include, but are not limited to, the following: PERMUTE-random circular permutation, random protein truncation, random nucleotide triplet substitution, random domain/tag/multiple amino acid insertion, codon scanning mutagenesis, and multicodon scanning mutagenesis. These techniques all require the design of mini-Mu transposons. Thermo Scientific manufactures kits for the design of these transposons.
Random mutagenesis methods altering the target DNA length
These methods involve altering gene length via insertion and deletion mutations. An example is the tandem repeat insertion (TRINS) method. This technique results in the generation of tandem repeats of random fragments of the target gene via rolling circle amplification and concurrent incorporation of these repeats into the target gene.
Mutator strains
Mutator strains are bacterial cell lines which are deficient in one or more DNA repair mechanisms. An example of a mutator strain is E. coli XL1-RED, which is deficient in the MutS, MutD, and MutT DNA repair pathways. Mutator strains are useful for introducing many types of mutation; however, they show progressive sickness of the culture because of the accumulation of mutations in the strain's own genome.
Focused mutagenesis
Focused mutagenic methods produce mutations at predetermined amino acid residues. These techniques require an understanding of the sequence-function relationship for the protein of interest. Understanding this relationship allows for the identification of residues which are important for stability, stereoselectivity, and catalytic efficiency. Examples of methods that produce focused mutagenesis are below.
Site saturation mutagenesis
Site saturation mutagenesis is a PCR based method used to target amino acids with significant roles in protein function. The two most common techniques for performing this are whole plasmid single PCR, and overlap extension PCR.
Whole plasmid single PCR is also referred to as site directed mutagenesis (SDM). SDM products are subjected to DpnI endonuclease digestion. This digestion results in cleavage of only the parental strand, because the parental strand contains GmATC sites which are methylated at N6 of adenine. SDM does not work well for large plasmids of over ten kilobases. Also, this method is only capable of replacing two nucleotides at a time.
Overlap extension PCR requires the use of two pairs of primers. One primer in each set contains a mutation. A first round of PCR using these primer sets is performed and two double stranded DNA duplexes are formed. A second round of PCR is then performed in which these duplexes are denatured and annealed with the primer sets again to produce heteroduplexes, in which each strand has a mutation. Any gaps in these newly formed heteroduplexes are filled with DNA polymerases and further amplified.
Sequence saturation mutagenesis (SeSaM)
Sequence saturation mutagenesis results in the randomization of the target sequence at every nucleotide position. This method begins with the generation of variable-length DNA fragments tailed with universal bases via the use of terminal transferases at the 3' termini. Next, these fragments are extended to full length using a single-stranded template. The universal bases are replaced with random standard bases, causing mutations. There are several modified versions of this method, such as SeSaM-Tv-II, SeSaM-Tv+, and SeSaM-III.
Single primer reactions in parallel (SPRINP)
This site saturation mutagenesis method involves two separate PCR reactions, the first of which uses only forward primers, while the second uses only reverse primers. This avoids primer dimer formation.
Mega primed and ligase free focused mutagenesis
This site saturation mutagenic technique begins with one mutagenic oligonucleotide and one universal flanking primer. These two reactants are used for an initial PCR cycle. Products from this first PCR cycle are used as mega primers for the next PCR.
Ω-PCR
This site saturation mutagenic method is based on overlap extension PCR. It is used to introduce mutations at any site in a circular plasmid.
PFunkel-OmniChange-OSCARR
This method utilizes user-defined site directed mutagenesis at single or multiple sites simultaneously. OSCARR is an acronym for one-pot simple methodology for cassette randomization and recombination. This randomization and recombination results in randomization of desired fragments of a protein. OmniChange is a sequence-independent, multisite saturation mutagenesis method which can saturate up to five independent codons on a gene.
Trimer-dimer mutagenesis
This method removes redundant codons and stop codons.
Cassette mutagenesis
This is a PCR based method. Cassette mutagenesis begins with the synthesis of a DNA cassette containing the gene of interest, which is flanked on either side by restriction sites. The endonuclease which cleaves these restriction sites also cleaves sites in the target plasmid. The DNA cassette and the target plasmid are both treated with endonucleases to cleave these restriction sites and create sticky ends. Next, the products from this cleavage are ligated together, resulting in the insertion of the gene into the target plasmid. An alternative form of cassette mutagenesis called combinatorial cassette mutagenesis is used to identify the functions of individual amino acid residues in the protein of interest. Recursive ensemble mutagenesis then utilizes information from previous combinatorial cassette mutagenesis. Codon cassette mutagenesis allows the insertion or replacement of a single codon at a particular site in double stranded DNA.
Sexual methods
Sexual methods of directed evolution involve in vitro recombination which mimics natural in vivo recombination. Generally, these techniques require high sequence homology between parental sequences. These techniques are often used to recombine two different parental genes, and these methods create crossovers between these genes.
In vitro homologous recombination
Homologous recombination can be categorized as either in vivo or in vitro. In vitro homologous recombination mimics natural in vivo recombination. These in vitro recombination methods require high sequence homology between parental sequences. These techniques exploit the natural diversity in parental genes by recombining them to yield chimeric genes. The resulting chimeras show a blend of parental characteristics.
DNA shuffling
This in vitro technique was one of the first techniques in the era of recombination. It begins with the digestion of homologous parental genes into small fragments by DNase1. These small fragments are then purified from undigested parental genes. Purified fragments are then reassembled using primer-less PCR. This PCR involves homologous fragments from different parental genes priming for each other, resulting in chimeric DNA. The chimeric DNA of parental size is then amplified using end terminal primers in regular PCR.
Random priming in vitro recombination (RPR)
This in vitro homologous recombination method begins with the synthesis of many short gene fragments exhibiting point mutations using random sequence primers. These fragments are reassembled to full length parental genes using primer-less PCR. These reassembled sequences are then amplified using PCR and subjected to further selection processes. This method is advantageous relative to DNA shuffling because there is no use of DNase1, thus there is no bias for recombination next to a pyrimidine nucleotide. This method is also advantageous due to its use of synthetic random primers which are uniform in length, and lack biases. Finally this method is independent of the length of DNA template sequence, and requires a small amount of parental DNA.
Truncated metagenomic gene-specific PCR
This method generates chimeric genes directly from metagenomic samples. It begins with isolation of the desired gene by functional screening from metagenomic DNA sample. Next, specific primers are designed and used to amplify the homologous genes from different environmental samples. Finally, chimeric libraries are generated to retrieve the desired functional clones by shuffling these amplified homologous genes.
Staggered extension process (StEP)
This in vitro method is based on template switching to generate chimeric genes. This PCR based method begins with an initial denaturation of the template, followed by annealing of primers and a short extension time. All subsequent cycles generate annealing between the short fragments generated in previous cycles and different parts of the template. These short fragments and the templates anneal together based on sequence complementarity. This process of fragments annealing to template DNA is known as template switching. These annealed fragments then serve as primers for further extension. This method is carried out until the parental-length chimeric gene sequence is obtained. Execution of this method only requires flanking primers to begin. There is also no need for the DNase1 enzyme.
Random chimeragenesis on transient templates (RACHITT)
This method has been shown to generate chimeric gene libraries with an average of 14 crossovers per chimeric gene. It begins by aligning fragments from a parental top strand onto the bottom strand of a uracil-containing template from a homologous gene. 5' and 3' overhang flaps are cleaved and gaps are filled by the exonuclease and endonuclease activities of Pfu and Taq DNA polymerases. The uracil-containing template is then removed from the heteroduplex by treatment with a uracil DNA glycosylase, followed by further amplification using PCR. This method is advantageous because it generates chimeras with relatively high crossover frequency. However, it is somewhat limited by its complexity and the need for generation of single-stranded DNA and uracil-containing single-stranded template DNA.
Synthetic shuffling
Shuffling of synthetic degenerate oligonucleotides adds flexibility to shuffling methods, since oligonucleotides containing optimal codons and beneficial mutations can be included.
In vivo homologous recombination
Cloning performed in yeast involves PCR dependent reassembly of fragmented expression vectors. These reassembled vectors are then introduced to, and cloned in yeast. Using yeast to clone the vector avoids toxicity and counter-selection that would be introduced by ligation and propagation in E. coli.
Mutagenic organized recombination process by homologous in vivo grouping (MORPHING)
This method introduces mutations into specific regions of genes while leaving other parts intact by utilizing the high frequency of homologous recombination in yeast.
Phage-assisted continuous evolution (PACE)
This method utilizes a bacteriophage with a modified life cycle to transfer evolving genes from host to host. The phage's life cycle is designed in such a way that the transfer is correlated with the activity of interest from the enzyme. This method is advantageous because it requires minimal human intervention for the continuous evolution of the gene.
In vitro non-homologous recombination methods
These methods are based upon the fact that proteins can exhibit similar structural identity while lacking sequence homology.
Exon shuffling
Exon shuffling is the combination of exons from different proteins by recombination events occurring at introns. Orthologous exon shuffling involves combining exons from orthologous genes from different species. Orthologous domain shuffling involves shuffling of entire protein domains from orthologous genes from different species. Paralogous exon shuffling involves shuffling of exons from different genes from the same species. Paralogous domain shuffling involves shuffling of entire protein domains from paralogous proteins from the same species. Functional homolog shuffling involves shuffling of non-homologous domains which are functionally related. All of these processes begin with amplification of the desired exons from different genes using chimeric synthetic oligonucleotides. These amplification products are then reassembled into full length genes using primer-less PCR. During these PCR cycles the fragments act as templates and primers. This results in chimeric full length genes, which are then subjected to screening.
Incremental truncation for the creation of hybrid enzymes (ITCHY)
Fragments of parental genes are created using controlled digestion by exonuclease III. These fragments are blunted using endonuclease, and are ligated to produce hybrid genes. THIO-ITCHY is a modified ITCHY technique which utilizes nucleotide triphosphate analogs such as α-phosphothioate dNTPs. Incorporation of these nucleotides blocks digestion by exonuclease III. This inhibition of digestion by exonuclease III is called spiking. Spiking can be accomplished by first truncating genes with exonuclease to create fragments with short single-stranded overhangs. These fragments then serve as templates for amplification by DNA polymerase in the presence of small amounts of phosphothioate dNTPs. The resulting fragments are then ligated together to form full length genes. Alternatively, the intact parental genes can be amplified by PCR in the presence of normal dNTPs and phosphothioate dNTPs. These full length amplification products are then subjected to digestion by an exonuclease. Digestion will continue until the exonuclease encounters an α-pdNTP, resulting in fragments of different lengths. These fragments are then ligated together to generate chimeric genes.
SCRATCHY
This method generates libraries of hybrid genes exhibiting multiple crossovers by combining DNA shuffling and ITCHY. It begins with the construction of two independent ITCHY libraries, the first with gene A at the N-terminus and the other with gene B at the N-terminus. These hybrid gene fragments are separated using either restriction enzyme digestion or PCR with terminal primers via agarose gel electrophoresis. The isolated fragments are then mixed together and further digested using DNase1. Digested fragments are then reassembled by primerless PCR with template switching.
Recombined extension on truncated templates (RETT)
This method generates libraries of hybrid genes by template switching of uni-directionally growing polynucleotides in the presence of single stranded DNA fragments as templates for chimeras. This method begins with the preparation of single stranded DNA fragments by reverse transcription from target mRNA. Gene specific primers are then annealed to the single stranded DNA. These genes are then extended during a PCR cycle. This cycle is followed by template switching and annealing of the short fragments obtained from the earlier primer extension to other single stranded DNA fragments. This process is repeated until full length single stranded DNA is obtained.
Sequence homology-independent protein recombination (SHIPREC)
This method generates recombination between genes with little to no sequence homology. These chimeras are fused via a linker sequence containing several restriction sites. This construct is then digested using DNase1. Fragments are made blunt-ended using S1 nuclease. These blunt-ended fragments are put together into a circular sequence by ligation. This circular construct is then linearized using restriction enzymes for which the restriction sites are present in the linker region. This results in a library of chimeric genes in which the contribution of each gene to the 5' and 3' ends is reversed as compared to the starting construct.
Sequence independent site directed chimeragenesis (SISDC)
This method results in a library of genes with multiple crossovers from several parental genes. This method does not require sequence identity among the parental genes. This does require one or two conserved amino acids at every crossover position. It begins with alignment of parental sequences and identification of consensus regions which serve as crossover sites. This is followed by the incorporation of specific tags containing restriction sites followed by the removal of the tags by digestion with Bac1, resulting in genes with cohesive ends. These gene fragments are mixed and ligated in an appropriate order to form chimeric libraries.
Degenerate homo-duplex recombination (DHR)
This method begins with alignment of homologous genes, followed by identification of regions of polymorphism. Next, the top strand of the gene is divided into small degenerate oligonucleotides. The bottom strand is also digested into oligonucleotides to serve as scaffolds. These fragments are combined in solution, and top strand oligonucleotides are assembled onto the bottom strand oligonucleotides. Gaps between these fragments are filled with polymerase and ligated.
Random multi-recombinant PCR (RM-PCR)
This method involves the shuffling of plural DNA fragments without homology, in a single PCR. This results in the reconstruction of complete proteins by assembly of modules encoding different structural units.
User friendly DNA recombination (USERec)
This method begins with the amplification of gene fragments which need to be recombined, using uracil dNTPs. This amplification solution also contains primers, PfuTurbo, and Cx Hotstart DNA polymerase. Amplified products are next incubated with the USER enzyme. This enzyme catalyzes the removal of uracil residues from DNA, creating single base pair gaps. The USER enzyme-treated fragments are mixed and ligated using T4 DNA ligase and subjected to DpnI digestion to remove the template DNA. The resulting single-stranded fragments are subjected to amplification using PCR, and are transformed into E. coli.
Golden Gate shuffling (GGS) recombination
This method allows the recombination of at least nine different fragments in an acceptor vector by using a type IIS restriction enzyme, which cuts outside of its recognition sites. It begins with subcloning of the fragments in separate vectors to create BsaI flanking sequences on both sides. These vectors are then cleaved using the type IIS restriction enzyme BsaI, which generates four-nucleotide single-strand overhangs. Fragments with complementary overhangs are hybridized and ligated using T4 DNA ligase. Finally, these constructs are transformed into E. coli cells, which are screened for expression levels.
Phosphorothioate-based DNA recombination method (PRTec)
This method can be used to recombine structural elements or entire protein domains. It is based on phosphorothioate chemistry, which allows the specific cleavage of phosphorothioate diester bonds. The first step in the process begins with amplification of the fragments that need to be recombined, along with the vector backbone. This amplification is accomplished using primers with phosphorothiolated nucleotides at the 5' ends. Amplified PCR products are cleaved in an ethanol-iodine solution at high temperatures. Next, these fragments are hybridized at room temperature and transformed into E. coli, which repair any nicks.
Integron
This system is based upon a natural site specific recombination system in E. coli. This system is called the integron system, and produces natural gene shuffling. This method was used to construct and optimize a functional tryptophan biosynthetic operon in trp-deficient E. coli by delivering individual recombination cassettes or trpA-E genes along with regulatory elements with the integron system.
Y-Ligation based shuffling (YLBS)
This method generates single-stranded DNA strands which encompass a single block sequence either at the 5' or 3' end, complementary sequences in a stem loop region, and a D branch region serving as a primer binding site for PCR. Equivalent amounts of both 5' and 3' half strands are mixed and form a hybrid due to the complementarity in the stem region. Hybrids with a free phosphorylated 5' end on the 3' half strands are then ligated with free 3' ends on the 5' half strands using T4 DNA ligase in the presence of 0.1 mM ATP. Ligated products are then amplified by two types of PCR to generate pre-5' half and pre-3' half PCR products. These PCR products are converted to single strands via avidin-biotin binding to the 5' end of the primers containing stem sequences that were biotin labeled. Next, biotinylated 5' half strands and non-biotinylated 3' half strands are used as 5' and 3' half strands for the next Y-ligation cycle.
Semi-rational design
Semi-rational design uses information about a protein's sequence, structure and function, in tandem with predictive algorithms. Together these are used to identify target amino acid residues which are most likely to influence protein function. Mutations of these key amino acid residues create libraries of mutant proteins that are more likely to have enhanced properties.
Advances in semi-rational enzyme engineering and de novo enzyme design provide researchers with powerful and effective new strategies to manipulate biocatalysts. Integration of sequence and structure based approaches in library design has proven to be a great guide for enzyme redesign. Generally, current computational de novo and redesign methods do not compare to evolved variants in catalytic performance. Although experimental optimization may be produced using directed evolution, further improvements in the accuracy of structure predictions and greater catalytic ability will be achieved with improvements in design algorithms. Further functional enhancements may be included in future simulations by integrating protein dynamics.
Biochemical and biophysical studies, along with fine-tuning of predictive frameworks will be useful to experimentally evaluate the functional significance of individual design features. Better understanding of these functional contributions will then give feedback for the improvement of future designs.
Directed evolution will likely not be replaced as the method of choice for protein engineering, although computational protein design has fundamentally changed the way protein engineering can manipulate bio-macromolecules. Smaller, more focused and functionally-rich libraries may be generated by using methods which incorporate predictive frameworks for hypothesis-driven protein engineering. New design strategies and technical advances have begun a departure from traditional protocols, such as directed evolution, which represents the most effective strategy for identifying top-performing candidates in focused libraries. Whole-gene library synthesis is replacing shuffling and mutagenesis protocols for library preparation. Also, highly specific low-throughput screening assays are increasingly applied in place of monumental screening and selection efforts of millions of candidates. Together, these developments are poised to take protein engineering beyond directed evolution and towards practical, more efficient strategies for tailoring biocatalysts.
Screening and selection techniques
Once a protein has undergone directed evolution, rational design, or semi-rational design, the libraries of mutant proteins must be screened to determine which mutants show enhanced properties. Phage display methods are one option for screening proteins. This method involves the fusion of genes encoding the variant polypeptides with phage coat protein genes. Protein variants expressed on phage surfaces are selected by binding with immobilized targets in vitro. Phages with selected protein variants are then amplified in bacteria, followed by the identification of positive clones by enzyme linked immunosorbent assay. These selected phages are then subjected to DNA sequencing.
Cell surface display systems can also be utilized to screen mutant polypeptide libraries. The library mutant genes are incorporated into expression vectors which are then transformed into appropriate host cells. These host cells are subjected to further high throughput screening methods to identify the cells with desired phenotypes.
Cell free display systems have been developed to exploit in vitro protein translation or cell free translation. These methods include mRNA display, ribosome display, covalent and non covalent DNA display, and in vitro compartmentalization.
Enzyme engineering
Enzyme engineering is the application of modifying an enzyme's structure (and, thus, its function) or modifying the catalytic activity of isolated enzymes to produce new metabolites, to allow new (catalyzed) pathways for reactions to occur, or to convert certain compounds into others (biotransformation). These products are useful as chemicals, pharmaceuticals, fuel, food, or agricultural additives.
An enzyme reactor consists of a vessel containing a reaction medium that is used to perform a desired conversion by enzymatic means. The enzymes used in this process are free in the solution. Microorganisms are also an important source of enzymes.
Examples of engineered proteins
Computing methods have been used to design a protein with a novel fold, such as Top7, and sensors for unnatural molecules. The engineering of fusion proteins has yielded rilonacept, a pharmaceutical that has secured Food and Drug Administration (FDA) approval for treating cryopyrin-associated periodic syndrome.
Another computing method, IPRO, successfully engineered the switching of cofactor specificity of Candida boidinii xylose reductase. Iterative Protein Redesign and Optimization (IPRO) redesigns proteins to increase or give specificity to native or novel substrates and cofactors. This is done by repeatedly randomly perturbing the structure of the proteins around specified design positions, identifying the lowest energy combination of rotamers, and determining whether the new design has a lower binding energy than prior ones.
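The core loop of such redesign procedures can be illustrated with a short Python sketch. This is only a minimal perturb-and-accept loop in the spirit of the method described above, not the actual IPRO implementation; the toy_energy and toy_perturb functions are hypothetical stand-ins for a real force-field scoring step and a backbone/rotamer perturbation step.
import random

def iterative_redesign(energy, perturb, initial, n_iter=1000, seed=0):
    """Generic perturb-and-accept loop: repeatedly perturb the current best
    design and keep the perturbed design only if its energy is lower."""
    rng = random.Random(seed)
    best, best_e = initial, energy(initial)
    for _ in range(n_iter):
        candidate = perturb(best, rng)
        e = energy(candidate)
        if e < best_e:  # accept only designs with lower (better) energy
            best, best_e = candidate, e
    return best, best_e

# Toy stand-ins: a "design" is a list of rotamer indices at the design
# positions, and the "energy" is an arbitrary quadratic penalty.
TARGET = [2, 5, 1, 7]  # hypothetical optimal rotamer combination

def toy_energy(design):
    return sum((d - t) ** 2 for d, t in zip(design, TARGET))

def toy_perturb(design, rng):
    new = list(design)
    i = rng.randrange(len(new))  # pick one design position at random
    new[i] = rng.randrange(10)   # try a different rotamer there
    return new

best, e = iterative_redesign(toy_energy, toy_perturb, initial=[0, 0, 0, 0])
print(best, e)  # converges toward the hypothetical optimum
In a real redesign run the energy function would score binding of the docked substrate or cofactor, and the perturbation step would act on backbone coordinates and rotamer libraries rather than on a list of integers.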
Computation-aided design has also been used to engineer complex properties of a highly ordered nano-protein assembly. A protein cage, E. coli bacterioferritin (EcBfr), which naturally shows structural instability and incomplete self-assembly behavior by populating two oligomerization states, is the model protein in this study. Through computational analysis and comparison to its homologs, it has been found that this protein has a smaller-than-average dimeric interface on its two-fold symmetry axis, due mainly to the existence of an interfacial water pocket centered on two water-bridged asparagine residues. To investigate the possibility of engineering EcBfr for modified structural stability, a semi-empirical computational method is used to virtually explore the energy differences of the 480 possible mutants at the dimeric interface relative to the wild-type EcBfr. This computational study also converges on the water-bridged asparagines. Replacing these two asparagines with hydrophobic amino acids results in proteins that fold into alpha-helical monomers and assemble into cages, as evidenced by circular dichroism and transmission electron microscopy. Both thermal and chemical denaturation confirm that all redesigned proteins, in agreement with the calculations, possess increased stability. One of the three mutations shifts the population in favor of the higher-order oligomerization state in solution, as shown by both size exclusion chromatography and native gel electrophoresis.
An in silico method, PoreDesigner, was developed to redesign the bacterial channel protein OmpF to reduce its 1 nm pore size to any desired sub-nm dimension. Transport experiments on the narrowest designed pores revealed complete salt rejection when assembled in biomimetic block-polymer matrices.
See also
Display:
Bacterial display
Phage display
mRNA display
Ribosome display
Yeast display
Biomolecular engineering
Enzymology
Expanded genetic code
Fast parallel proteolysis (FASTpp)
Gene synthesis
Genetic engineering
In situ cyclization of proteins
Nucleic acid analogues
Protein structure prediction software
Proteomics
Proteome
SCOPE (protein engineering)
Structural biology
Synthetic biology
References
External links
Servers for protein engineering and related topics based on the WHAT IF software
Enzymes Built from Scratch – Researchers engineer never-before-seen catalysts using a new computational technique, Technology Review, March 10, 2008
Biochemistry
Enzymes
Biological engineering
Biotechnology
Chemical biology
Conceptual framework
A conceptual framework is an analytical tool with several variations and contexts. It can be applied in different categories of work where an overall picture is needed. It is used to make conceptual distinctions and organize ideas. Strong conceptual frameworks capture something real and do this in a way that is easy to remember and apply.
Examples
Isaiah Berlin used the metaphor of a "fox" and a "hedgehog" to make conceptual distinctions in how important philosophers and authors view the world. Berlin describes hedgehogs as those who use a single idea or organizing principle to view the world (such as Dante Alighieri, Blaise Pascal, Fyodor Dostoyevsky, Plato, Henrik Ibsen and Georg Wilhelm Friedrich Hegel). Foxes, on the other hand, incorporate a type of pluralism and view the world through multiple, sometimes conflicting, lenses (examples include Johann Wolfgang von Goethe, James Joyce, William Shakespeare, Aristotle, Herodotus, Molière, and Honoré de Balzac).
Economists use the conceptual framework of supply and demand to distinguish between the behavior and incentive systems of firms and consumers. Like many other conceptual frameworks, supply and demand can be presented through visual or graphical representations (see demand curve). Both political science and economics use principal agent theory as a conceptual framework. The politics-administration dichotomy is a long-standing conceptual framework used in public administration.
All three of these cases are examples of a macro level conceptual framework.
Overview
The use of the term conceptual framework crosses both scale (large and small theories) and contexts (social science, marketing, applied science, art etc.). The explicit definition of what a conceptual framework is and its application can therefore vary.
Conceptual frameworks are beneficial as organizing devices in empirical research. One set of scholars has applied the notion of a conceptual framework to deductive, empirical research at the micro- or individual study level. They employ American football plays as a useful metaphor to clarify the meaning of conceptual framework (used in the context of a deductive empirical study).
Likewise, conceptual frameworks are abstract representations, connected to the research project's goal, that direct the collection and analysis of data (on the plane of observation – the ground). Critically, a football play is a "plan of action" tied to a particular, timely purpose, usually summarized as long or short yardage. Shields and Rangarajan (2013) argue that it is this tie to "purpose" that makes American football plays such a good metaphor. They define a conceptual framework as "the way ideas are organized to achieve a research project's purpose". Like football plays, conceptual frameworks are connected to a research purpose or aim. Explanation is the most common type of research purpose employed in empirical research. The formal hypothesis of a scientific investigation is the framework associated with explanation.
Explanatory research usually focuses on "why" or "what caused" a phenomenon. Formal hypotheses posit possible explanations (answers to the why question) that are tested by collecting data and assessing the evidence (usually quantitative using statistical tests). For example, Kai Huang wanted to determine what factors contributed to residential fires in U.S. cities. Three factors were posited to influence residential fires. These factors (environment, population, and building characteristics) became the hypotheses or conceptual framework he used to achieve his purpose – explain factors that influenced home fires in U.S. cities.
Types
Several types of conceptual frameworks have been identified, and line up with a research purpose in the following ways:
Working hypothesis – exploration or exploratory research
Pillar questions – exploration or exploratory research
Descriptive categories – description or descriptive research
Practical ideal type – analysis (gauging)
Models of operations research – decision making
Formal hypothesis – explanation and prediction
Note that Shields and Rangarajan (2013) do not claim that the above are the only framework-purpose pairing. Nor do they claim the system is applicable to inductive forms of empirical research. Rather, the conceptual framework-research purpose pairings they propose are useful and provide new scholars a point of departure to develop their own research design.
Frameworks have also been used to explain conflict theory and the balance necessary to reach what amounts to resolution. Within these conflict frameworks, visible and invisible variables function under concepts of relevance. Boundaries form and within these boundaries, tensions regarding laws and chaos (or freedom) are mitigated. These frameworks often function like cells, with sub-frameworks, stasis, evolution and revolution. Anomalies may exist without adequate "lenses" or "filters" to see them and may become visible only when the tools exist to define them.
See also
Analogy
Inquiry
Conceptual model
Theory
References
Further reading
Shields, Patricia and Rangarajan, Nandhini. (2013). A Playbook for Research Methods: Integrating Conceptual Frameworks and Project Management. Stillwater, OK; New Forums Press
Research
Conceptual modelling
Le Chatelier's principle
In chemistry, Le Chatelier's principle, also called Chatelier's principle, Braun–Le Chatelier principle, Le Chatelier–Braun principle or the equilibrium law, is a principle used to predict the effect of a change in conditions on chemical equilibrium.
The principle is named after French chemist Henry Louis Le Chatelier, who enunciated it in 1884 by extending the reasoning from the Van 't Hoff relation of how temperature variations change the equilibrium to variations of pressure and what is now called chemical potential; it is sometimes also credited to Karl Ferdinand Braun, who discovered it independently in 1887. It can be stated roughly as follows: when a system at equilibrium is subjected to a change in concentration, temperature, volume, or pressure, the system shifts to a new equilibrium in a way that partly counteracts the applied change.
In scenarios outside thermodynamic equilibrium, there can arise phenomena in contradiction to an over-general statement of Le Chatelier's principle.
Le Chatelier's principle is sometimes alluded to in discussions of topics other than thermodynamics.
Thermodynamic statement
Le Chatelier–Braun principle analyzes the qualitative behaviour of a thermodynamic system when one particular externally controlled state variable, the 'driving' variable, changes by a small amount (the 'driving change'), causing a change (the 'response of prime interest') in its conjugate state variable, all other externally controlled state variables remaining constant. The response illustrates 'moderation' in ways evident in two related thermodynamic equilibria. Of the driving variable and its conjugate, one has to be intensive, the other extensive. Also, as a necessary part of the scenario, there is some particular auxiliary 'moderating' state variable, with its own conjugate state variable. For this to be of interest, the 'moderating' variable or its conjugate must undergo a change in some part of the experimental protocol; this can happen either by imposing a change on the moderating variable, or by holding it constant while its conjugate responds. For the principle to hold with full generality, the moderating variable must be extensive or intensive accordingly as the driving variable is. Obviously, to give this scenario physical meaning, the 'driving' variable and the 'moderating' variable must be subject to separate independent experimental controls and measurements.
Explicit statement
The principle can be stated in two ways, formally different, but substantially equivalent, and, in a sense, mutually 'reciprocal'. The two ways illustrate the Maxwell relations, and the stability of thermodynamic equilibrium according to the second law of thermodynamics, evident as the spread of energy amongst the state variables of the system in response to an imposed change.
The two ways of statement share an 'index' experimental protocol that may be described as 'changed driver, moderation permitted'. Along with the imposed driving change, it holds the conjugate of the moderating variable constant and allows the uncontrolled response of the 'moderating' variable itself, along with the 'index' response of prime interest in the conjugate of the driving variable.
The two ways of statement differ in their respective compared protocols. One way posits a 'changed driver, no moderation' protocol. The other way posits a 'fixed driver, imposed moderation' protocol.
'Driving' variable forced to change, 'moderating' variable allowed to respond; compared with 'driving' variable forced to change, 'moderating' variable forced not to change
This way compares the 'index' protocol with the 'changed driver, no moderation' protocol, in order to assess the effect of the imposed driving change with and without moderation. The 'no moderation' protocol prevents moderation by enforcing, through an adjustment of its conjugate, that the moderating variable does not change, and it observes the resulting 'no-moderation' response of the variable of prime interest. Provided that the moderating variable does in fact respond in the 'index' protocol, the principle states that the moderated ('index') response is smaller in magnitude than the unmoderated response.
In other words, change in the 'moderating' state variable moderates the effect of the driving change on the responding conjugate variable.
'Driving' variable forced to change, 'moderating' variable allowed to respond; compared with 'driving' variable forced not to change, 'moderating' variable forced to change
This way also uses two experimental protocols, the 'index' protocol and the 'fixed driver, imposed moderation' protocol, to compare the index effect with the effect of 'moderation' alone. The 'index' protocol is executed first; the response of prime interest is observed, and the response of the 'moderating' variable is also measured. With that knowledge, the 'fixed driver, moderation imposed' protocol then holds the driving variable fixed; through an adjustment, the protocol also imposes on the 'moderating' variable the change learnt from the just previous measurement, and measures the resulting change in the variable of prime interest. Provided that the moderating variable does respond in the 'index' protocol, the principle states that the signs of the two changes in the variable of prime interest are opposite.
Again, in other words, change in the 'moderating' state variable opposes the effect of the driving change on the responding conjugate variable.
Other statements
The duration of adjustment depends on the strength of the negative feedback to the initial shock. The principle is typically used to describe closed negative-feedback systems, but applies, in general, to thermodynamically closed and isolated systems in nature, since the second law of thermodynamics ensures that the disequilibrium caused by an instantaneous shock is eventually followed by a new equilibrium.
While well rooted in chemical equilibrium, Le Chatelier's principle can also be used in describing mechanical systems in that a system put under stress will respond in such a way as to reduce or minimize that stress. Moreover, the response will generally be via the mechanism that most easily relieves that stress. Shear pins and other such sacrificial devices are design elements that protect systems against stress applied in undesired manners to relieve it so as to prevent more extensive damage to the entire system, a practical engineering application of Le Chatelier's principle.
Chemistry
Effect of change in concentration
Changing the concentration of a chemical will shift the equilibrium to the side that would counter that change in concentration. The chemical system will attempt to partially oppose the change imposed on its original state of equilibrium. In turn, the rate of reaction, extent, and yield of products will be altered corresponding to the impact on the system.
This can be illustrated by the equilibrium of carbon monoxide and hydrogen gas, reacting to form methanol.
CO + 2 H2 ⇌ CH3OH
Suppose we were to increase the concentration of CO in the system. Using Le Chatelier's principle, we can predict that the concentration of methanol will increase, decreasing the total change in CO. If we are to add a species to the overall reaction, the reaction will favor the side opposing the addition of the species. Likewise, the subtraction of a species would cause the reaction to "fill the gap" and favor the side where the species was reduced. This observation is supported by the collision theory. As the concentration of CO is increased, the frequency of successful collisions of that reactant would increase also, allowing for an increase in forward reaction, and generation of the product. Even if the desired product is not thermodynamically favored, the end-product can be obtained if it is continuously removed from the solution.
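The direction of such a shift can be made quantitative by comparing the reaction quotient Q with the equilibrium constant K: if Q < K the net reaction runs forward, and if Q > K it runs in reverse. The following Python sketch applies this to the methanol equilibrium above; the concentrations are illustrative values chosen for the example, not measured data.
def reaction_quotient(c_co, c_h2, c_ch3oh):
    """Q for CO + 2 H2 <=> CH3OH, in terms of molar concentrations."""
    return c_ch3oh / (c_co * c_h2 ** 2)

# Illustrative equilibrium concentrations in mol/L.
co, h2, ch3oh = 0.50, 1.00, 0.25
K = reaction_quotient(co, h2, ch3oh)  # at equilibrium, Q equals K

# Suddenly double the CO concentration at constant volume.
Q = reaction_quotient(2 * co, h2, ch3oh)

print(f"K = {K:.3f}, Q after adding CO = {Q:.3f}")
if Q < K:
    print("Q < K: the net reaction runs forward, making more CH3OH.")
elif Q > K:
    print("Q > K: the net reaction runs in reverse.")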
The effect of a change in concentration is often exploited synthetically for condensation reactions (i.e., reactions that extrude water) that are equilibrium processes (e.g., formation of an ester from carboxylic acid and alcohol or an imine from an amine and aldehyde). This can be achieved by physically sequestering water, by adding desiccants like anhydrous magnesium sulfate or molecular sieves, or by continuous removal of water by distillation, often facilitated by a Dean-Stark apparatus.
Effect of change in temperature
The effect of changing the temperature in the equilibrium can be made clear by 1) incorporating heat as either a reactant or a product, and 2) assuming that an increase in temperature increases the heat content of a system. When the reaction is exothermic (ΔH is negative and energy is released), heat is included as a product, and when the reaction is endothermic (ΔH is positive and energy is consumed), heat is included as a reactant. Hence, whether increasing or decreasing the temperature would favor the forward or the reverse reaction can be determined by applying the same principle as with concentration changes.
Take, for example, the reversible reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Because this reaction is exothermic, it produces heat:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) + heat
If the temperature were increased, the heat content of the system would increase, so the system would consume some of that heat by shifting the equilibrium to the left, thereby producing less ammonia. More ammonia would be produced if the reaction were run at a lower temperature, but a lower temperature also lowers the rate of the process, so, in practice (the Haber process) the temperature is set at a compromise value that allows ammonia to be made at a reasonable rate with an equilibrium concentration that is not too unfavorable.
In exothermic reactions, an increase in temperature decreases the equilibrium constant, K, whereas in endothermic reactions, an increase in temperature increases K.
Le Chatelier's principle applied to changes in concentration or pressure can be understood by giving K a constant value. The effect of temperature on equilibria, however, involves a change in the equilibrium constant. The dependence of K on temperature is determined by the sign of ΔH. The theoretical basis of this dependence is given by the Van 't Hoff equation.
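A rough sense of this dependence can be obtained from the integrated Van 't Hoff relation, ln(K2/K1) = −(ΔH°/R)(1/T2 − 1/T1), under the usual assumption that ΔH° is constant over the temperature range. The Python sketch below applies it to an exothermic reaction with ΔH° = −92 kJ mol−1; the starting value of K is an illustrative placeholder, not an experimental figure.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def k_at_temperature(k1, t1, t2, delta_h):
    """Integrated Van 't Hoff equation, assuming a constant standard
    reaction enthalpy delta_h (in J/mol) between temperatures t1 and t2 (in K)."""
    return k1 * math.exp(-delta_h / R * (1.0 / t2 - 1.0 / t1))

delta_h = -92_000.0  # J/mol, exothermic (ammonia synthesis)
k_400 = 41.0         # illustrative equilibrium constant at 400 K

for t in (400, 500, 600, 700):
    print(t, k_at_temperature(k_400, 400, t, delta_h))
# For an exothermic reaction the computed K falls steadily as T rises,
# matching the qualitative Le Chatelier prediction.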
Effect of change in pressure
The equilibrium concentrations of the products and reactants do not directly depend on the total pressure of the system. They may depend on the partial pressure of the products and reactants, but if the number of moles of gaseous reactants is equal to the number of moles of gaseous products, pressure has no effect on equilibrium.
Changing total pressure by adding an inert gas at constant volume does not affect the equilibrium concentrations (see Effect of adding an inert gas below).
Changing total pressure by changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations (see Effect of change in volume below).
Effect of change in volume
Changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations. With a pressure increase due to a decrease in volume, the side of the equilibrium with fewer moles is more favorable and with a pressure decrease due to an increase in volume, the side with more moles is more favorable. There is no effect on a reaction where the number of moles of gas is the same on each side of the chemical equation.
Considering the reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Note that there are 4 moles of gas on the left-hand side and 2 moles of gas on the right-hand side. When the volume of the system is changed, the partial pressures of the gases change. If we were to decrease pressure by increasing volume, the equilibrium of the above reaction will shift to the left, because the reactant side has a greater number of moles than does the product side. The system tries to counteract the decrease in partial pressure of gas molecules by shifting to the side that exerts greater pressure. Similarly, if we were to increase pressure by decreasing volume, the equilibrium shifts to the right, counteracting the pressure increase by shifting to the side with fewer moles of gas that exert less pressure. If the volume is increased, the drop in partial pressures affects the reactant side (with its four moles of gas) more strongly than the product side (with two), so the reaction quotient rises above the equilibrium constant and the equilibrium shifts toward the reactants.
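The same conclusion follows from the reaction quotient written in partial pressures: at fixed composition, Qp for this reaction scales as 1/P^2, so compressing the mixture lowers Qp below Kp and the net reaction runs toward the side with fewer moles of gas. A small Python sketch, using illustrative mole fractions, makes this explicit.
def qp_ammonia(x_n2, x_h2, x_nh3, p_total):
    """Reaction quotient in partial pressures for N2 + 3 H2 <=> 2 NH3.
    Each partial pressure is the mole fraction times the total pressure."""
    p_n2, p_h2, p_nh3 = x_n2 * p_total, x_h2 * p_total, x_nh3 * p_total
    return p_nh3 ** 2 / (p_n2 * p_h2 ** 3)

# Illustrative equilibrium composition (mole fractions) at 1 bar.
x_n2, x_h2, x_nh3 = 0.24, 0.72, 0.04
Kp = qp_ammonia(x_n2, x_h2, x_nh3, 1.0)

# Halve the volume (double the total pressure) before the composition
# has had a chance to change: Qp drops by a factor of 2**2 = 4.
Q = qp_ammonia(x_n2, x_h2, x_nh3, 2.0)
print(Kp, Q, Q / Kp)  # Q/Kp = 0.25, so Q < Kp and the equilibrium shifts right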
Effect of adding an inert gas
An inert gas (or noble gas), such as helium, is one that does not react with other elements or compounds. Adding an inert gas into a gas-phase equilibrium at constant volume does not result in a shift. This is because the addition of a non-reactive gas does not change the equilibrium equation, as the inert gas appears on both sides of the chemical reaction equation. For example, if A and B react to form C and D, but X does not participate in the reaction: a A + b B + x X ⇌ c C + d D + x X. While it is true that the total pressure of the system increases, the total pressure does not have any effect on the equilibrium constant; rather, it is a change in partial pressures that will cause a shift in the equilibrium. If, however, the volume is allowed to increase in the process, the partial pressures of all gases would be decreased, resulting in a shift towards the side with the greater number of moles of gas. The shift will never occur on the side with fewer moles of gas. It is also known as Le Chatelier's postulate.
Effect of a catalyst
A catalyst increases the rate of a reaction without being consumed in the reaction. The use of a catalyst does not affect the position and composition of the equilibrium of a reaction, because both the forward and backward reactions are sped up by the same factor.
For example, consider the Haber process for the synthesis of ammonia (NH3):
N2 + 3 H2 ⇌ 2 NH3
In the above reaction, iron (Fe) and molybdenum (Mo) will function as catalysts if present. They will accelerate any reactions, but they do not affect the state of the equilibrium.
General statements
Thermodynamic equilibrium processes
Le Chatelier's principle refers to states of thermodynamic equilibrium. The latter are stable against perturbations that satisfy certain criteria; this is essential to the definition of thermodynamic equilibrium.
Alternatively, it can be stated that changes in the temperature, pressure, volume, or concentration of a system will result in predictable and opposing changes in the system in order to achieve a new equilibrium state.
For this, a state of thermodynamic equilibrium is most conveniently described through a fundamental relation that specifies a cardinal function of state, of the energy kind, or of the entropy kind, as a function of state variables chosen to fit the thermodynamic operations through which a perturbation is to be applied.
In theory and, nearly, in some practical scenarios, a body can be in a stationary state with zero macroscopic flows and rates of chemical reaction (for example, when no suitable catalyst is present), yet not in thermodynamic equilibrium, because it is metastable or unstable; then Le Chatelier's principle does not necessarily apply.
Non-equilibrium processes
A simple body or a complex thermodynamic system can also be in a stationary state with non-zero rates of flow and chemical reaction; sometimes the word "equilibrium" is used in reference to such a state, though by definition it is not a thermodynamic equilibrium state. Sometimes, it is proposed to consider Le Chatelier's principle for such states. For this exercise, rates of flow and of chemical reaction must be considered. Such rates are not supplied by equilibrium thermodynamics. For such states, there are no simple statements that echo Le Chatelier's principle. Prigogine and Defay demonstrate that such a scenario may exhibit moderation, or may exhibit a measured amount of anti-moderation, though not a run-away anti-moderation that goes to completion. The example analysed by Prigogine and Defay is the Haber process.
This situation is clarified by considering two basic methods of analysis of a process. One is the classical approach of Gibbs, the other uses the near- or local- equilibrium approach of De Donder. The Gibbs approach requires thermodynamic equilibrium. The Gibbs approach is reliable within its proper scope, thermodynamic equilibrium, though of course it does not cover non-equilibrium scenarios. The De Donder approach can cover equilibrium scenarios, but also covers non-equilibrium scenarios in which there is only local thermodynamic equilibrium, and not thermodynamic equilibrium proper. The De Donder approach allows state variables called extents of reaction to be independent variables, though in the Gibbs approach, such variables are not independent. Thermodynamic non-equilibrium scenarios can contradict an over-general statement of Le Chatelier's Principle.
Related system concepts
It is common to treat the principle as a more general observation of systems, roughly stated: a change imposed on a system prompts an opposing response that tends to counteract the change.
The concept of systemic maintenance of a stable steady state despite perturbations has a variety of names, and has been studied in a variety of contexts, chiefly in the natural sciences. In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase their yield. In pharmacology, the binding of ligands to receptors may shift the equilibrium according to Le Chatelier's principle, thereby explaining the diverse phenomena of receptor activation and desensitization. In biology, the concept of homeostasis is different from Le Chatelier's principle, in that homoeostasis is generally maintained by processes of active character, as distinct from the passive or dissipative character of the processes described by Le Chatelier's principle in thermodynamics. In economics, even further from thermodynamics, allusion to the principle is sometimes regarded as helping explain the price equilibrium of efficient economic systems. In some dynamic systems, the end-state cannot be determined from the shock or perturbation.
Economics
In economics, a similar concept also named after Le Chatelier was introduced by American economist Paul Samuelson in 1947. There the generalized Le Chatelier principle is for a maximum condition of economic equilibrium: Where all unknowns of a function are independently variable, auxiliary constraints—"just-binding" in leaving initial equilibrium unchanged—reduce the response to a parameter change. Thus, factor-demand and commodity-supply elasticities are hypothesized to be lower in the short run than in the long run because of the fixed-cost constraint in the short run.
Since the change of the value of an objective function in a neighbourhood of the maximum position is described by the envelope theorem, Le Chatelier's principle can be shown to be a corollary thereof.
See also
Homeostasis
Common-ion effect
Response reactions
References
Bibliography of cited sources
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York.
Münster, A. (1970), Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London.
Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, translated by D.H. Everett, Longmans, Green & Co, London.
External links
YouTube video of Le Chatelier's principle and pressure
Equilibrium chemistry
Homeostasis
Svedberg
In chemistry, a Svedberg unit or svedberg (symbol S, sometimes Sv) is a non-SI metric unit for sedimentation coefficients. The Svedberg unit offers a measure of a particle's size indirectly, based on its sedimentation rate under acceleration (i.e. how fast a particle of given size and shape settles out of suspension). The svedberg is a measure of time, defined as exactly 10^−13 seconds (100 fs).
For biological macromolecules and cell organelles like ribosomes, the sedimentation rate is typically measured as the rate of travel in a centrifuge tube subjected to high g-force.
The svedberg (S) is distinct from the SI unit sievert and the non-SI unit sverdrup, both of which use the symbol Sv, and from the SI unit siemens, which also uses the symbol S.
Naming
The unit is named after the Swedish chemist Theodor Svedberg (1884–1971), winner of the 1926 Nobel Prize in chemistry for his work on disperse systems, colloids and his invention of the ultracentrifuge.
Factors
The Svedberg coefficient is a nonlinear function. A particle's mass, density, and shape will determine its S value. The S value depends on the frictional forces retarding its movement, which, in turn, are related to the average cross-sectional area of the particle.
The sedimentation coefficient is the ratio of the speed of a substance in a centrifuge to its acceleration, in comparable units. A substance with a sedimentation coefficient of 26 S will travel at 26 micrometers per second under the influence of an acceleration of a million gravities (10^7 m/s^2). Centrifugal acceleration is given as rω^2, where r is the radial distance from the rotation axis and ω is the angular velocity in radians per second.
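This relationship is easy to turn into a small calculation: for a rotor of given speed and radius, the centrifugal acceleration is rω^2, and the drift speed is the sedimentation coefficient times that acceleration. The Python sketch below uses illustrative values (a 70S particle, 7 cm from the axis, 40,000 rpm); the numbers are examples, not data from a particular instrument.
import math

SVEDBERG = 1e-13  # seconds

def sedimentation_velocity(s_value, rpm, radius_m):
    """Drift speed v = s * a, with centrifugal acceleration a = r * omega**2."""
    omega = 2.0 * math.pi * rpm / 60.0   # angular velocity in rad/s
    acceleration = radius_m * omega ** 2
    return s_value * SVEDBERG * acceleration

# Illustrative numbers: a 70S ribosome, 7 cm from the axis, at 40,000 rpm.
v = sedimentation_velocity(70, rpm=40_000, radius_m=0.07)
print(f"{v * 1e6:.1f} micrometers per second")  # roughly 8.6 um/s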
Bigger particles tend to sediment faster and so have higher Svedberg values.
Svedberg units are not directly additive since they represent a rate of sedimentation, not weight.
Use
In centrifugation of small biochemical species, a convention has developed in which sedimentation coefficients are expressed in the Svedberg units.
The svedberg is the most important measure used to distinguish ribosomes. Ribosomes are composed of two complex subunits, each including rRNA and protein components. In prokaryotes (including bacteria), the subunits are named 30S and 50S for their "size" in Svedberg units. These subunits are made up of three forms of rRNA (16S, 23S, and 5S) and ribosomal proteins.
For bacterial ribosomes, ultracentrifugation yields intact ribosomes (70S) as well as separated ribosomal subunits, the large subunit (50S) and the small subunit (30S). Within cells, ribosomes normally exist as a mixture of joined and separate subunits. The largest particles (whole ribosomes) sediment near the bottom of the tube, whereas the smaller particles (separated 50S and 30S subunits) appear in upper fractions.
See also
Sedimentation coefficient
Differential centrifugation
Footnotes
References
External links
Svedberg unit - nobelprize.org
Units of time
Non-SI metric units
Philosophical methodology
Philosophical methodology encompasses the methods used to philosophize and the study of these methods. Methods of philosophy are procedures for conducting research, creating new theories, and selecting between competing theories. In addition to the description of methods, philosophical methodology also compares and evaluates them.
Philosophers have employed a great variety of methods. Methodological skepticism tries to find principles that cannot be doubted. The geometrical method deduces theorems from self-evident axioms. The phenomenological method describes first-person experience. Verificationists study the conditions of empirical verification of sentences to determine their meaning. Conceptual analysis decomposes concepts into fundamental constituents. Common-sense philosophers use widely held beliefs as their starting point of inquiry, whereas ordinary language philosophers extract philosophical insights from ordinary language. Intuition-based methods, like thought experiments, rely on non-inferential impressions. The method of reflective equilibrium seeks coherence among beliefs, while the pragmatist method assesses theories by their practical consequences. The transcendental method studies the conditions without which an entity could not exist. Experimental philosophers use empirical methods.
The choice of method can significantly impact how theories are constructed and the arguments used to support them. As a result, methodological disagreements can lead to philosophical disagreements.
Definition
The term "philosophical methodology" refers either to the methods used to philosophize or to the branch of metaphilosophy studying these methods. A method is a way of doing things, such as a set of actions or decisions, in order to achieve a certain goal, when used under the right conditions. In the context of inquiry, a method is a way of conducting one's research and theorizing, like inductive or axiomatic methods in logic or experimental methods in the sciences. Philosophical methodology studies the methods of philosophy. It is not primarily concerned with whether a philosophical position, such as metaphysical dualism or utilitarianism, is true or false. Instead, it asks how one can determine which position should be adopted.
In the widest sense, any principle for choosing between competing theories may be considered as part of the methodology of philosophy. In this sense, the philosophical methodology is "the general study of criteria for theory selection". For example, Occam’s Razor is a methodological principle of theory selection favoring simple over complex theories. A closely related aspect of philosophical methodology concerns the question of which conventions one needs to adopt necessarily to succeed at theory making. But in a more narrow sense, only guidelines that help philosophers learn about facts studied by philosophy qualify as philosophical methods. This is the more common sense, which applies to most of the methods listed in this article. In this sense, philosophical methodology is closely related to epistemology in that it consists in epistemological methods that enable philosophers to arrive at knowledge. Because of this, the problem of the methods of philosophy is central to how philosophical claims are to be justified.
An important difference in philosophical methodology concerns the distinction between descriptive and normative questions. Descriptive questions ask what methods philosophers actually use or used in the past, while normative questions ask what methods they should use. The normative aspect of philosophical methodology expresses the idea that there is a difference between good and bad philosophy. In this sense, philosophical methods either articulate the standards of evaluation themselves or the practices that ensure that these standards are met. Philosophical methods can be understood as tools that help the theorist do good philosophy and arrive at knowledge. The normative question of philosophical methodology is quite controversial since different schools of philosophy often have very different views on what constitutes good philosophy and how to achieve it.
Methods
A great variety of philosophical methods has been proposed. Some of these methods were developed as a reaction to other methods, for example, to counter skepticism by providing a secure path to knowledge. In other cases, one method may be understood as a development or a specific application of another method. Some philosophers or philosophical movements give primacy to one specific method, while others use a variety of methods depending on the problem they are trying to solve. It has been argued that many of the philosophical methods are also commonly used implicitly in more crude forms by regular people and are only given a more careful, critical, and systematic exposition in philosophical methodology.
Methodological skepticism
Methodological skepticism, also referred to as Cartesian doubt, uses systematic doubt as a method of philosophy. It is motivated by the search for an absolutely certain foundation of knowledge. The method for finding these foundations is doubt: only that which is indubitable can serve this role. While this approach has been influential, it has also received various criticisms. One problem is that it has proven very difficult to find such absolutely certain claims if the doubt is applied in its most radical form. Another is that while absolute certainty may be desirable, it is by no means necessary for knowledge. In this sense, it excludes too much and seems to be unwarranted and arbitrary, since it is not clear why very certain theorems justified by strong arguments should be abandoned just because they are not absolutely certain. This can be seen in relation to the insights discovered by the empirical sciences, which have proven very useful even though they are not indubitable.
Geometrical method
The geometrical method came to particular prominence through rationalists like Baruch Spinoza. It starts from a small set of self-evident axioms together with relevant definitions and tries to deduce a great variety of theorems from this basis, thereby mirroring the methods found in geometry. Historically, it can be understood as a response to methodological skepticism: it consists in trying to find a foundation of certain knowledge and then expanding this foundation through deductive inferences. The theorems arrived at this way may be challenged in two ways. On the one hand, they may be derived from axioms that are not as self-evident as their defenders proclaim and thereby fail to inherit the status of absolute certainty. For example, many philosophers have rejected the claim of self-evidence concerning one of René Descartes's first principles stating that "he can know that whatever he perceives clearly and distinctly is true only if he first knows that God exists and is not a deceiver". Another example is the causal axiom of Spinoza's system that "the knowledge of an effect depends on and involves knowledge of its cause", which has been criticized in various ways. In this sense, philosophical systems built using the geometrical method are open to criticisms that reject their basic axioms. A different form of objection holds that the inference from the axioms to the theorems may be faulty, for example, because it does not follow a rule of inference or because it includes implicitly assumed premises that are not themselves self-evident.
Phenomenological method
Phenomenology is the science of appearances; broadly speaking, the science of phenomena, given that almost all phenomena are perceived. The phenomenological method aims to study the appearances themselves and the relations found between them. This is achieved through the so-called phenomenological reduction, also known as epoché or bracketing: the researcher suspends their judgments about the natural external world in order to focus exclusively on the experience of how things appear to be, independent of whether these appearances are true or false. One idea behind this approach is that our presuppositions of what things are like can get in the way of studying how they appear to be and thereby mislead the researcher into thinking they know the answer instead of looking for themselves. The phenomenological method can also be seen as a reaction to methodological skepticism, since its defenders traditionally claimed that it could lead to absolute certainty and thereby help philosophy achieve the status of a rigorous science. But phenomenology has been heavily criticized because of this overly optimistic outlook concerning the certainty of its insights. A different objection to the method of phenomenological reduction holds that it involves an artificial stance that places too much emphasis on the theoretical attitude at the expense of feeling and practical concerns.
Another phenomenological method is called "eidetic variation". It is used to study the essences of things. This is done by imagining an object of the kind under investigation. The features of this object are then varied in order to see whether the resulting object still belongs to the investigated kind. If the object can survive the change of a certain feature then this feature is inessential to this kind. Otherwise, it belongs to the kind's essence. For example, when imagining a triangle, one can vary its features, like the length of its sides or its color. These features are inessential since the changed object is still a triangle, but it ceases to be a triangle if a fourth side is added.
Verificationism
The method of verificationism consists in understanding sentences by analyzing their characteristic conditions of verification, i.e. by determining which empirical observations would prove them to be true. A central motivation behind this method has been to distinguish meaningful from meaningless sentences. This is sometimes expressed through the claim that "[the] meaning of a statement is the method of its verification". Meaningful sentences, like the ones found in the natural sciences, have clear conditions of empirical verification. But since most metaphysical sentences cannot be verified by empirical observations, they are deemed to be non-sensical by verificationists. Verificationism has been criticized on various grounds. On the one hand, it has proved very difficult to give a precise formulation that includes all scientific claims, including the ones about unobservables. This is connected to the problem of underdetermination in the philosophy of science: the problem that the observational evidence is often insufficient to determine which theory is true. This would lead to the implausible conclusion that even for the empirical sciences, many of their claims would be meaningless. But on a deeper level, the basic claim underlying verificationism seems itself to be meaningless by its own standards: it is not clear what empirical observations could verify the claim that the meaning of a sentence is the method of its verification. In this sense, verificationism would be contradictory by directly refuting itself. These and other problems have led some theorists, especially from the sciences, to adopt falsificationism instead. It is a less radical approach that holds that serious theories or hypotheses should at least be falsifiable, i.e. there should be some empirical observations that could prove them wrong.
Conceptual analysis
The goal of conceptual analysis is to decompose or analyze a given concept into its fundamental constituents. It consists in considering a philosophically interesting concept, like knowledge, and determining the necessary and sufficient conditions for whether the application of this concept is true. The resulting claim about the relation between the concept and its constituents is normally seen as knowable a priori since it is true only in virtue of the involved concepts and thereby constitutes an analytic truth. Usually, philosophers use their own intuitions to determine whether a concept is applicable to a specific situation to test their analyses. But other approaches have also been utilized by using not the intuitions of philosophers but of regular people, an approach often defended by experimental philosophers.
G. E. Moore proposed that the correctness of a conceptual analysis can be tested using the open question method. According to this view, asking whether the decomposition fits the concept should result in a closed or pointless question. If it results in an open or intelligible question, then the analysis does not exactly correspond to what we have in mind when we use the term. This can be used, for example, to reject the utilitarian claim that "goodness" is "whatever maximizes happiness". The underlying argument is that the question "Is what is good what maximizes happiness?" is an open question, unlike the question "Is what is good what is good?", which is a closed question. One problem with this approach is that it results in a very strict conception of what constitutes a correct conceptual analysis, leading to the conclusion that many concepts, like "goodness", are simple or indefinable.
Willard Van Orman Quine criticized conceptual analysis as part of his criticism of the analytic-synthetic distinction. This objection is based on the idea that all claims, including how concepts are to be decomposed, are ultimately based on empirical evidence. Another problem with conceptual analysis is that it is often very difficult to find an analysis of a concept that really covers all its cases. For this reason, Rudolf Carnap has suggested a modified version that aims to cover only the most paradigmatic cases while excluding problematic or controversial cases. While this approach has become more popular in recent years, it has also been criticized based on the argument that it tends to change the subject rather than resolve the original problem. In this sense, it is closely related to the method of conceptual engineering, which consists in redefining concepts in fruitful ways or developing new interesting concepts. This method has been applied, for example, to the concepts of gender and race.
Common sense
The method of common sense is based on the fact that we already have a great variety of beliefs that seem very certain to us, even if we do not believe them based on explicit arguments. Common sense philosophers use these beliefs as their starting point of philosophizing. This often takes the form of criticism directed against theories whose premises or conclusions are very far removed from how the average person thinks about the issue in question. G. E. Moore, for example, rejects J. M. E. McTaggart's sophisticated argumentation for the unreality of time based on his common-sense impression that time exists. He holds that his simple common-sense impression is much more certain than that McTaggart's arguments are sound, even though Moore was unable to pinpoint where McTaggart's arguments went wrong. According to his method, common sense constitutes an evidence base. This base may be used to eliminate philosophical theories that stray too far away from it, that are abstruse from its perspective. This can happen because either the theory itself or consequences that can be drawn from it violate common sense. For common sense philosophers, it is not the task of philosophy to question common sense. Instead, they should analyze it to formulate theories in accordance with it.
One important argument against this method is that common sense has often been wrong in the past, as is exemplified by various scientific discoveries. This suggests that common sense is in such cases just an antiquated theory that is eventually eliminated by the progress of science. For example, Albert Einstein's theory of relativity constitutes a radical departure from the common-sense conception of space and time, and quantum physics poses equally serious problems to how we tend to think about how elementary particles behave. This puts into question that common sense is a reliable source of knowledge. Another problem is that for many issues, there is no one universally accepted common-sense opinion. In such cases, common sense only amounts to the majority opinion, which should not be blindly accepted by researchers. This problem can be approached by articulating a weaker version of the common-sense method. One such version is defended by Roderick Chisholm, who allows that theories violating common sense may still be true. He contends that, in such cases, the theory in question is prima facie suspect and the burden of proof is always on its side. But such a shift in the burden of proof does not constitute a blind belief in common sense since it leaves open the possibility that, for various issues, there is decisive evidence against the common-sense opinion.
Ordinary language philosophy
The method of ordinary language philosophy consists in tackling philosophical questions based on how the related terms are used in ordinary language. In this sense, it is related to the method of common sense but focuses more on linguistic aspects. Some types of ordinary language philosophy only take a negative form in that they try to show how philosophical problems are not real problems at all. Instead, it is aimed to show that false assumptions, to which humans are susceptible due to the confusing structure of natural language, are responsible for this false impression. Other types take more positive approaches by defending and justifying philosophical claims, for example, based on what sounds insightful or odd to the average English speaker.
One problem for ordinary language philosophy is that regular speakers may have many different reasons for using a certain expression. Sometimes they intend to express what they believe, but other times they may be motivated by politeness or other conversational norms independent of the truth conditions of the expressed sentences. This significantly complicates ordinary language philosophy, since philosophers have to take the specific context of the expression into account, which may considerably alter its meaning. This criticism is partially mitigated by J. L. Austin's approach to ordinary language philosophy. According to him, ordinary language already has encoded many important distinctions and is our point of departure in theorizing. But "ordinary language is not the last word: in principle, it can everywhere be supplemented and improved upon and superseded". However, it also falls prey to another criticism: that it is often not clear how to distinguish ordinary from non-ordinary language. This makes it difficult in all but the paradigmatic cases to decide whether a philosophical claim is or is not supported by ordinary language.
Intuition and thought experiments
Methods based on intuition, like ethical intuitionism, use intuitions to evaluate whether a philosophical claim is true or false. In this context, intuitions are seen as a non-inferential source of knowledge: they consist in the impression of correctness one has when considering a certain claim. They are intellectual seemings that make it appear to the thinker that the considered proposition is true or false without the need to consider arguments for or against the proposition. This is sometimes expressed by saying that the proposition in question is self-evident. Examples of such propositions include "torturing a sentient being for fun is wrong" or "it is irrational to believe both something and its opposite". But not all defenders of intuitionism restrict intuitions to self-evident propositions. Instead, often weaker non-inferential impressions are also included as intuitions, such as a mother's intuition that her child is innocent of a certain crime.
Intuitions can be used in various ways as a philosophical method. On the one hand, philosophers may consult their intuitions in relation to very general principles, which may then be used to deduce further theorems. Another technique, which is often applied in ethics, consists in considering concrete scenarios instead of general principles. This often takes the form of thought experiments, in which certain situations are imagined with the goal of determining the possible consequences of the imagined scenario. These consequences are assessed using intuition and counterfactual thinking. For this reason, thought experiments are sometimes referred to as intuition pumps: they activate the intuitions concerning the specific situation, which may then be generalized to arrive at universal principles. In some cases, the imagined scenario is physically possible but it would not be feasible to make an actual experiment due to the costs, negative consequences, or technological limitations. But other thought experiments even work with scenarios that defy what is physically possible. It is controversial to what extent thought experiments merit to be characterized as real experiments and whether the insights they provide are reliable.
One problem with intuitions in general and thought experiments in particular consists in assessing their epistemological status, i.e. whether, how much, and in which circumstances they provide justification in comparison to other sources of knowledge. Some of its defenders claim that intuition is a reliable source of knowledge just like perception, with the difference being that it happens without the sensory organs. Others compare it not to perception but to the cognitive ability to evaluate counterfactual conditionals, which may be understood as the capacity to answer what-if questions. But the reliability of intuitions has been contested by its opponents. For example, wishful thinking may be the reason why it intuitively seems to a person that a proposition is true without providing any epistemological support for this proposition. Another objection, often raised in the empirical and naturalist tradition, is that intuitions do not constitute a reliable source of knowledge since the practitioner restricts themselves to an inquiry from their armchair instead of looking at the world to make empirical observations.
Reflective equilibrium
Reflective equilibrium is a state in which a thinker has the impression that they have considered all the relevant evidence for and against a theory and have made up their mind on this issue. It is a state of coherent balance among one's beliefs. This does not imply that all the evidence has really been considered, but it is tied to the impression that engaging in further inquiry is unlikely to make one change one's mind, i.e. that one has reached a stable equilibrium. In this sense, it is the endpoint of the deliberative process on the issue in question. The philosophical method of reflective equilibrium aims at reaching this type of state by mentally going back and forth between all relevant beliefs and intuitions. In this process, the thinker may have to let go of some beliefs or deemphasize certain intuitions that do not fit into the overall picture in order to progress.
In this wide sense, reflective equilibrium is connected to a form of coherentism about epistemological justification and is thereby opposed to foundationalist attempts at finding a small set of fixed and unrevisable beliefs from which to build one's philosophical theory. One problem with this wide conception of the reflective equilibrium is that it seems trivial: it is a truism that the rational thing to do is to consider all the evidence before making up one's mind and to strive towards building a coherent perspective. But as a method to guide philosophizing, this is usually too vague to provide specific guidance.
When understood in a more narrow sense, the method aims at finding an equilibrium between particular intuitions and general principles. On this view, the thinker starts with intuitions about particular cases and formulates general principles that roughly reflect these intuitions. The next step is to deal with the conflicts between the two by adjusting both the intuitions and the principles to reconcile them until an equilibrium is reached. One problem with this narrow interpretation is that it depends very much on the intuitions one started with. This means that different philosophers may start with very different intuitions and may therefore be unable to find a shared equilibrium. For example, the narrow method of reflective equilibrium may lead some moral philosophers towards utilitarianism and others towards Kantianism.
Pragmatic method
The pragmatic method assesses the truth or falsity of theories by looking at the consequences of accepting them. In this sense, "[t]he test of truth is utility: it's true if it works". Pragmatists approach intractable philosophical disputes in a down-to-earth fashion by asking about the concrete consequences associated, for example, with whether an abstract metaphysical theory is true or false. This is also intended to clarify the underlying issues by spelling out what would follow from them. Another goal of this approach is to expose pseudo-problems, which involve a merely verbal disagreement without any genuine difference on the level of the consequences between the competing standpoints.
Succinct summaries of the pragmatic method base it on the pragmatic maxim, of which various versions exist. An important version is due to Charles Sanders Peirce: "Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object." Another formulation is due to William James: "To develop perfect clearness in our thoughts of an object, then, we need only consider what effects of a conceivable practical kind the object may involve – what sensations we are to expect from it and what reactions we must prepare". Various criticisms to the pragmatic method have been raised. For example, it is commonly rejected that the terms "true" and "useful" mean the same thing. A closely related problem is that believing in a certain theory may be useful to one person and useless to another, which would mean the same theory is both true and false.
Transcendental method
The transcendental method is used to study phenomena by reflecting on the conditions of possibility of these phenomena. This method usually starts out with an obvious fact, often about our mental life, such as what we know or experience. It then goes on to argue that for this fact to obtain, other facts also have to obtain: they are its conditions of possibility. This type of argument is called "transcendental argument": it argues that these additional assumptions also have to be true because otherwise, the initial fact would not be the case. For example, it has been used to argue for the existence of an external world based on the premise that the experience of the temporal order of our mental states would not be possible otherwise. Another example argues in favor of a description of nature in terms of concepts such as motion, force, and causal interaction based on the claim that an objective account of nature would not be possible otherwise.
Transcendental arguments have faced various challenges. For one thing, the claim that belief in a certain assumption is necessary for the experience of a certain entity is often not obvious. So in the example above, critics can argue against the transcendental argument by denying the claim that an external world is necessary for the experience of the temporal order of our mental states. But even if this point is granted, it does not guarantee that the assumption itself is true. So even if the belief in a given proposition is a psychological necessity for a certain experience, it does not automatically follow that this belief itself is true. Instead, it could be the case that humans are just wired in such a way that they have to believe in certain false assumptions.
Experimental philosophy
Experimental philosophy is the most recent development of the methods discussed in this article: it began only in the early years of the 21st century. Experimental philosophers try to answer philosophical questions by gathering empirical data. It is an interdisciplinary approach that applies the methods of psychology and the cognitive sciences to topics studied by philosophy. This usually takes the form of surveys probing the intuitions of ordinary people and then drawing conclusions from the findings. For example, one such inquiry came to the conclusion that justified true belief may be sufficient for knowledge despite various Gettier cases claiming to show otherwise. The method of experimental philosophy can be used in either a negative or a positive program. As a negative program, it aims to challenge traditional philosophical movements and positions. This can be done, for example, by showing how the intuitions used to defend certain claims vary considerably depending on factors such as culture, gender, or ethnicity. This variation casts doubt on the reliability of the intuitions and thereby also on theories supported by them. As a positive program, it uses empirical data to support its own philosophical claims. It differs from other philosophical methods in that it usually studies the intuitions of ordinary people, rather than those of experts, and uses them as philosophical evidence.
One problem for both the positive and the negative approaches is that the data obtained from surveys do not constitute hard empirical evidence since they do not directly express the intuitions of the participants. The participants may react to subtle pragmatic cues in giving their answers, which brings with it the need for further interpretation in order to get from the given answers to the intuitions responsible for these answers. Another problem concerns how reliable the intuitions of ordinary people are on what are often very technical issues. The core of this objection is that, for many topics, the opinions of ordinary people are not very reliable since they have little familiarity with the issues themselves and the underlying problems they may pose. For this reason, it has been argued that they cannot replace the expert intuitions found in trained philosophers. Some critics have even argued that experimental philosophy does not really form part of philosophy. This objection does not deny that the method of experimental philosophy has value; it only denies that this method belongs to philosophical methodology.
Others
Various other philosophical methods have been proposed. The Socratic method or Socratic debate is a form of cooperative philosophizing in which one philosopher usually first states a claim, which their interlocutor then scrutinizes by asking questions about various related claims, often with the implicit goal of putting the initial claim into doubt. It continues to be a popular method for teaching philosophy. Plato and Aristotle emphasize the role of wonder in the practice of philosophy. On this view, "philosophy begins in wonder" and "[i]t was their wonder, astonishment, that first led men to philosophize and still leads them". This position is also adopted in the more recent philosophy of Nicolai Hartmann. Various other types of methods were discussed in ancient Greek philosophy, such as analysis, synthesis, dialectics, demonstration, definition, and reduction to absurdity. The medieval philosopher Thomas Aquinas identifies composition and division as ways of forming propositions while he sees invention and judgment as forms of reasoning from the known to the unknown.
Various methods for the selection between competing theories have been proposed. They often focus on the theoretical virtues of the involved theories. One such method is based on the idea that, everything else being equal, the simpler theory is to be preferred. Another gives preference to the theory that provides the best explanation. According to the method of epistemic conservatism, we should, all other things being equal, prefer the theory which, among its competitors, is the most conservative, i.e. the one closest to the beliefs we currently hold. One problem with these methods of theory selection is that it is usually not clear how the different virtues are to be weighted, often resulting in cases where they are unable to resolve disputes between competing theories that excel at different virtues.
Methodological naturalism holds that all philosophical claims are synthetic claims that ultimately depend for their justification or rejection on empirical observational evidence. In this sense, philosophy is continuous with the natural sciences in that they both give priority to the scientific method for investigating all areas of reality.
According to truthmaker theorists, every true proposition is true because another entity, its truthmaker, exists. This principle can be used as a methodology to critically evaluate philosophical theories. In particular, this concerns theories that accept certain truths but are unable to provide their truthmaker. Proponents of such theories are derided as ontological cheaters. For example, this can be applied to philosophical presentism, the view that nothing outside the present exists. Philosophical presentists usually accept the very common belief that dinosaurs existed but have trouble providing a truthmaker for this belief since they deny existence to past entities.
In philosophy, the term "genealogical method" refers to a form of criticism that tries to undermine commonly held beliefs by uncovering their historical origin and function. For example, it may be used to reject specific moral claims or to question the status of truth by giving a concrete historical reconstruction of how their development was contingent on power relations in society. This is usually accompanied by the assertion that these beliefs were accepted and became established because of non-rational considerations, for example because they served the interests of a dominant class.
Disagreements and influence
The disagreements within philosophy concern not only which first-order philosophical claims are true but also the second-order issue of which philosophical methods to use. One way to evaluate philosophical methods is to assess how well they do at solving philosophical problems. The question of the nature of philosophy has important implications for which methods of inquiry are appropriate to philosophizing. Seeing philosophy as an empirical science brings its methods much closer to the methods found in the natural sciences. Seeing it as the attempt to clarify concepts and increase understanding, on the other hand, usually leads to a methodology much more focused on a priori reasoning. In this sense, philosophical methodology is closely tied up with the question of how philosophy is to be defined. Different conceptions of philosophy associate it with different goals, making certain methods more or less suited to reaching the corresponding goal.
Interest in philosophical methodology has grown considerably in contemporary philosophy. But some philosophers reject its importance by emphasizing that "preoccupation with questions about methods tends to distract us from prosecuting the methods themselves". However, such objections are often dismissed by pointing out that philosophy is at its core a reflective and critical enterprise, which is perhaps best exemplified by its preoccupation with its own methods. This is also backed up by arguments to the effect that one's philosophical method has important implications for how one does philosophy and which philosophical claims one accepts or rejects. Since philosophy also studies the methodology of other disciplines, such as the methods of science, it has been argued that the study of its own methodology is an essential part of philosophy.
In several instances in the history of philosophy, the discovery of a new philosophical method, such as Cartesian doubt or the phenomenological method, has had important implications both for how philosophers conducted their theorizing and for what claims they set out to defend. In some cases, such discoveries led the involved philosophers to overly optimistic outlooks, seeing them as historic breakthroughs that would dissolve all previous disagreements in philosophy.
Relation to other fields
Science
The methods of philosophy differ in various respects from the methods found in the natural sciences. One important difference is that philosophy does not use experimental data obtained through measuring equipment like telescopes or cloud chambers to justify its claims. For example, even philosophical naturalists emphasizing the close relation between philosophy and the sciences mostly practice a form of armchair theorizing instead of gathering empirical data. Experimental philosophers are an important exception: they use methods found in social psychology and other empirical sciences to test their claims.
One reason for the methodological difference between philosophy and science is that philosophical claims are usually more speculative and cannot be verified or falsified by looking through a telescope. This problem is not solved by citing works published by other philosophers, since it only defers the question of how their insights are justified. An additional complication concerning testimony is that different philosophers often defend mutually incompatible claims, which poses the challenge of how to select between them. Another difference between scientific and philosophical methodology is that there is wide agreement among scientists concerning their methods, testing procedures, and results. This is often linked to the fact that science has seen much more progress than philosophy.
Epistemology
An important goal of philosophical methods is to assist philosophers in attaining knowledge. This is often understood in terms of evidence. In this sense, philosophical methodology is concerned with the questions of what constitutes philosophical evidence, how much support it offers, and how to acquire it. In contrast to the empirical sciences, it is often claimed that empirical evidence is not used in justifying philosophical theories, that philosophy is less about the empirical world and more about how we think about the empirical world. In this sense, philosophy is often identified with conceptual analysis, which is concerned with explaining concepts and showing their interrelations. Philosophical naturalists often reject this line of thought and hold that empirical evidence can confirm or disconfirm philosophical theories, at least indirectly.
Philosophical evidence, which may be obtained, for example, through intuitions or thought experiments, is central for justifying basic principles and axioms. These principles can then be used as premises to support further conclusions. Some approaches to philosophical methodology emphasize that these arguments have to be deductively valid, i.e. that the truth of their premises ensures the truth of their conclusion. In other cases, philosophers may commit themselves to working hypotheses or norms of investigation even though they lack sufficient evidence. Such assumptions can be quite fruitful in simplifying the possibilities the philosopher needs to consider and by guiding them to ask interesting questions. But the lack of evidence makes this type of enterprise vulnerable to criticism.
See also
Scholarly method
Scientific method
Historical method
Dialectic
References
External links
Metaphilosophy
Empiricism
In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences.
Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience.
Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation.
Empiricism, often invoked by natural scientists, holds that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method.
Etymology
The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived.
Background
A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry.
Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method".
History
Early empiricism
Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty.
The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism.
The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle, who in De Anima compared the intellect to a writing tablet on which nothing is yet written.
Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses").
This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon."
Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE)
During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing into an elaborate theory by Avicenna (c. 980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur.
In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding.
A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society.
During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind.
Renaissance Italy
In the late Renaissance, various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing, Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519), said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings."
Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Thomas Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi.
The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c. 1520 – 1591), father of Galileo and the inventor of monody, used the empirical method to solve musical problems: firstly, problems of tuning, such as the relationship of pitch to string tension and mass in stringed instruments and to the volume of air in wind instruments; and secondly, problems of composition, through his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. He was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed., Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Through his tuning research, Vincenzo found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers': the square of the numbers concerned yielded those musical intervals, not the actual numbers, as had been believed. Through this and other discoveries that demonstrated the fallibility of traditional authorities, he developed a radically empirical attitude, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry.
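In modern terms, the point about squared ratios follows from the behaviour of a stretched string of fixed length, whose frequency grows with the square root of its tension (f ∝ √T): raising the pitch by an octave, a 2:1 frequency ratio, by tension alone requires roughly a 4:1 increase in tension, the square of the interval ratio, rather than the 2:1 ratio suggested by the hammer legend.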
British empiricism
British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views.
In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously attributed with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written.
There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes.
A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism.
Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote, for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition."
Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations.
Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past.
Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt.
Phenomenalism
Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences.
Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation".
Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge, including mathematics. As summarized by D. W. Hamlyn, Mill's empiricism held that knowledge of any kind is not from direct experience but an inductive inference from direct experience.
The problems other philosophers have had with Mill's position center around the following issues: firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered.
In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction.
The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation.
There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man).
Logical empiricism
Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach.
The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect language that would be free of the ambiguities and deformations of natural language, which in their view gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle". Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.
In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists.
Pragmatism
In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking.
Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view.
Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception.
Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.
John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
See also
Endnotes
References
Achinstein, Peter, and Barker, Stephen F. (1969), The Legacy of Logical Positivism: Studies in the Philosophy of Science, Johns Hopkins University Press, Baltimore, MD.
Aristotle, "On the Soul" (De Anima), W. S. Hett (trans.), pp. 1–203 in Aristotle, Volume 8, Loeb Classical Library, William Heinemann, London, UK, 1936.
Aristotle, Posterior Analytics.
Barone, Francesco (1986), Il neopositivismo logico, Laterza, Roma Bari
Berlin, Isaiah (2004), The Refutation of Phenomenalism, Isaiah Berlin Virtual Library.
Bolender, John (1998), "Factual Phenomenalism: A Supervenience Theory", Sorites, no. 9, pp. 16–31.
Chisholm, R. (1948), "The Problem of Empiricism", Journal of Philosophy 45, 512–17.
Dewey, John (1906), Studies in Logical Theory.
Encyclopædia Britannica, "Empiricism", vol. 4, p. 480.
Hume, D., A Treatise of Human Nature, L.A. Selby-Bigge (ed.), Oxford University Press, London, UK, 1975.
Hume, David. "An Enquiry Concerning Human Understanding", in Enquiries Concerning the Human Understanding and Concerning the Principles of Morals, 2nd edition, L.A. Selby-Bigge (ed.), Oxford University Press, Oxford, UK, 1902. Gutenberg press full-text
James, William (1911), The Meaning of Truth.
Keeton, Morris T. (1962), "Empiricism", pp. 89–90 in Dagobert D. Runes (ed.), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Leftow, Brian (ed., 2006), Aquinas: Summa Theologiae, Questions on God, pp. vii et seq.
Macmillan Encyclopedia of Philosophy (1969), "Development of Aristotle's Thought", vol. 1, pp. 153ff.
Macmillan Encyclopedia of Philosophy (1969), "George Berkeley", vol. 1, p. 297.
Macmillan Encyclopedia of Philosophy (1969), "Empiricism", vol. 2, p. 503.
Macmillan Encyclopedia of Philosophy (1969), "Mathematics, Foundations of", vol. 5, pp. 188–89.
Macmillan Encyclopedia of Philosophy (1969), "Axiomatic Method", vol. 5, pp. 192ff.
Macmillan Encyclopedia of Philosophy (1969), "Epistemological Discussion", subsections on "A Priori Knowledge" and "Axioms".
Macmillan Encyclopedia of Philosophy (1969), "Phenomenalism", vol. 6, p. 131.
Macmillan Encyclopedia of Philosophy (1969), "Thomas Aquinas", subsection on "Theory of Knowledge", vol. 8, pp. 106–07.
Marconi, Diego (2004), "Fenomenismo", in Gianni Vattimo and Gaetano Chiurazzi (eds.), L'Enciclopedia Garzanti di Filosofia, 3rd edition, Garzanti, Milan, Italy.
Markie, P. (2004), "Rationalism vs. Empiricism" in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
Maxwell, Nicholas (1998), The Comprehensibility of the Universe: A New Conception of Science, Oxford University Press, Oxford.
Mill, J.S., "An Examination of Sir William Rowan Hamilton's Philosophy", in A.J. Ayer and Ramond Winch (eds.), British Empirical Philosophers, Simon and Schuster, New York, NY, 1968.
Morick, H. (1980), Challenges to Empiricism, Hackett Publishing, Indianapolis, IN.
Peirce, C.S., "Lectures on Pragmatism", Cambridge, Massachusetts, March 26 – May 17, 1903. Reprinted in part, Collected Papers, CP 5.14–212. Published in full with editor's introduction and commentary, Patricia Ann Turisi (ed.), Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard "Lectures on Pragmatism", State University of New York Press, Albany, NY, 1997. Reprinted, pp. 133–241, Peirce Edition Project (eds.), The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913), Indiana University Press, Bloomington, IN, 1998.
Rescher, Nicholas (1985), The Heritage of Logical Positivism, University Press of America, Lanham, MD.
Rock, Irvin (1983), The Logic of Perception, MIT Press, Cambridge, Massachusetts.
Rock, Irvin, (1997) Indirect Perception, MIT Press, Cambridge, Massachusetts.
Runes, D.D. (ed., 1962), Dictionary of Philosophy, Littlefield, Adams, and Company, Totowa, NJ.
Sini, Carlo (2004), "Empirismo", in Gianni Vattimo et al. (eds.), Enciclopedia Garzanti della Filosofia.
Solomon, Robert C., and Higgins, Kathleen M. (1996), A Short History of Philosophy, pp. 68–74.
Sorabji, Richard (1972), Aristotle on Memory.
Thornton, Stephen (1987), Berkeley's Theory of Reality, Eprint
Vanzo, Alberto (2014), "From Empirics to Empiricists", Intellectual History Review, 2014, Eprint.
Ward, Teddy (n.d.), "Empiricism", Eprint.
Wilson, Fred (2005), "John Stuart Mill", in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Eprint.
External links
Empiricist Man
History of science
Justification (epistemology)
Philosophical methodology
Internalism and externalism
Philosophy of science
Epistemological schools and traditions
Hydrazone
Hydrazones are a class of organic compounds with the structure R1R2C=N−NH2. They are related to ketones and aldehydes by the replacement of the oxygen =O with the =N−NH2 functional group. They are usually formed by the action of hydrazine on ketones or aldehydes.
Synthesis
Hydrazine, organohydrazines, and 1,1-diorganohydrazines react with aldehydes and ketones to give hydrazones.
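In general terms, the condensation can be written as follows (shown for hydrazine itself; substituted hydrazines react analogously, and the R groups are placeholders):
R1R2C=O + H2N−NH2 → R1R2C=N−NH2 + H2O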
Phenylhydrazine reacts with reducing sugars to form hydrazones known as osazones, a reaction developed by German chemist Emil Fischer as a test to differentiate monosaccharides.
Uses
Hydrazones are the basis for various analyses of ketones and aldehydes. For example, dinitrophenylhydrazine coated onto a silica sorbent is the basis of an adsorption cartridge. The hydrazones are then eluted and analyzed by high-performance liquid chromatography (HPLC) using a UV detector.
The compound carbonyl cyanide-p-trifluoromethoxyphenylhydrazone (abbreviated as FCCP) is used to uncouple ATP synthesis and reduction of oxygen in oxidative phosphorylation in molecular biology.
Hydrazones are the basis of bioconjugation strategies. Hydrazone-based coupling methods are used in medical biotechnology to couple drugs to targeted antibodies (see ADC), e.g. antibodies against a certain type of cancer cell. The hydrazone-based bond is stable at neutral pH (in the blood), but is rapidly destroyed in the acidic environment of lysosomes of the cell. The drug is thereby released in the cell, where it exerts its function.
Reactions
Hydrazones are susceptible to hydrolysis, which regenerates the carbonyl compound and the parent hydrazine:
R1R2C=N−NR'2 + H2O → R1R2C=O + H2N−NR'2
Alkyl hydrazones are 10²- to 10³-fold more sensitive to hydrolysis than analogous oximes.
When derived from hydrazine itself, hydrazones condense with a second equivalent of a carbonyl compound to give azines:
R1R2C=N−NH2 + O=CR1R2 → R1R2C=N−N=CR1R2 + H2O
Hydrazones are intermediates in the Wolff–Kishner reduction.
Hydrazones are reactants in hydrazone iodination, the Shapiro reaction, and the Bamford–Stevens reaction to vinyl compounds. Hydrazones can also be synthesized by the Japp–Klingemann reaction via β-keto acids or β-keto-esters and aryl diazonium salts. Hydrazones are converted to azines when used in the preparation of 3,5-disubstituted 1H-pyrazoles, a reaction also well known using hydrazine hydrate. With a transition metal catalyst, hydrazones can serve as organometallic reagent surrogates to react with various electrophiles.
N,N-dialkylhydrazones
In N,N-dialkylhydrazones the C=N bond can be hydrolysed, oxidised and reduced, and the N–N bond can be reduced to the free amine. The carbon atom of the C=N bond can react with organometallic nucleophiles. The alpha-hydrogen atom is more acidic by 10 orders of magnitude compared to the ketone and therefore more nucleophilic. Deprotonation with, for instance, lithium diisopropylamide (LDA) gives an azaenolate which can be alkylated by alkyl halides. The hydrazines SAMP and RAMP function as chiral auxiliaries.
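The alkylation sequence described above can be sketched generically as follows; R, R' and R'' are placeholder substituents, and the final cleavage step is the subject of the next subsection:
R−CO−CH3 + H2N−NR'2 → R−C(=N−NR'2)−CH3 (hydrazone formation)
R−C(=N−NR'2)−CH3 + LDA → lithiated azaenolate
lithiated azaenolate + R''−X → R−C(=N−NR'2)−CH2−R'' (alkylation at the alpha carbon)
R−C(=N−NR'2)−CH2−R'' → R−CO−CH2−R'' (cleavage back to the carbonyl compound)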
Recovery of carbonyl compounds from N,N-dialkylhydrazones
Several methods are known to recover carbonyl compounds from N,N-dialkylhydrazones. Procedures include oxidative, hydrolytic or reductive cleavage conditions and can be compatible with a wide range of functional groups.
See also
Azo compound
Imine
Nitrosamine
Hydrogenation of carbon–nitrogen double bonds
References
Functional groups
Protein structure
Protein structure is the three-dimensional arrangement of atoms in an amino acid-chain molecule. Proteins are polymers, specifically polypeptides, formed from sequences of amino acids, which are the monomers of the polymer. A single amino acid monomer may also be called a residue, which indicates a repeating unit of a polymer. Proteins form by amino acids undergoing condensation reactions, in which the amino acids lose one water molecule per reaction in order to attach to one another with a peptide bond. By convention, a chain under 30 amino acids is often identified as a peptide, rather than a protein. To be able to perform their biological function, proteins fold into one or more specific spatial conformations driven by a number of non-covalent interactions, such as hydrogen bonding, ionic interactions, Van der Waals forces, and hydrophobic packing. To understand the functions of proteins at a molecular level, it is often necessary to determine their three-dimensional structure. This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry to determine the structure of proteins.
Protein structures range in size from tens to several thousand amino acids. By physical size, proteins are classified as nanoparticles, between 1 and 100 nm. Very large protein complexes can be formed from protein subunits. For example, many thousands of actin molecules assemble into a microfilament.
A protein usually undergoes reversible structural changes in performing its biological function. The alternative structures of the same protein are referred to as different conformations, and transitions between them are called conformational changes.
Levels of protein structure
There are four distinct levels of protein structure.
Primary structure
The primary structure of a protein refers to the sequence of amino acids in the polypeptide chain. The primary structure is held together by peptide bonds that are made during the process of protein biosynthesis. The two ends of the polypeptide chain are referred to as the carboxyl terminus (C-terminus) and the amino terminus (N-terminus) based on the nature of the free group on each extremity. Counting of residues always starts at the N-terminal end (NH2-group), which is the end where the amino group is not involved in a peptide bond. The primary structure of a protein is determined by the gene corresponding to the protein. A specific sequence of nucleotides in DNA is transcribed into mRNA, which is read by the ribosome in a process called translation. The sequence of amino acids in insulin was discovered by Frederick Sanger, establishing that proteins have defining amino acid sequences. The sequence of a protein is unique to that protein, and defines the structure and function of the protein. The sequence of a protein can be determined by methods such as Edman degradation or tandem mass spectrometry. Often, however, it is read directly from the sequence of the gene using the genetic code. Because a water molecule is lost when each peptide bond is formed, proteins are properly described as being made up of amino acid residues. Post-translational modifications such as phosphorylations and glycosylations are usually also considered a part of the primary structure, and cannot be read from the gene. For example, insulin is composed of 51 amino acid residues in 2 chains: one chain has 30 residues and the other has 21.
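As an illustration of how a primary structure can be read from a coding sequence, the following is a minimal Python sketch. The function and table names are arbitrary, and the hard-coded codon table is a deliberately partial subset of the standard genetic code chosen only for this example; a complete implementation would cover all 64 codons, for instance via an established library such as Biopython.

```python
# Minimal sketch: translating a coding-strand DNA sequence into a one-letter
# amino acid sequence. CODON_TABLE is a deliberately partial subset of the
# standard genetic code, used only for illustration.
CODON_TABLE = {
    "ATG": "M",                                       # methionine (start)
    "TTT": "F", "TTC": "F",                           # phenylalanine
    "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G",   # glycine
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",   # alanine
    "TGG": "W",                                       # tryptophan
    "TAA": "*", "TAG": "*", "TGA": "*",               # stop codons
}

def translate(dna: str) -> str:
    """Read codons in frame 1 until a stop codon or the end of the sequence."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3].upper(), "X")  # "X": codon not in this subset
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGGTGCATGGTAA"))  # prints "MGAW"
```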
Secondary structure
Secondary structure refers to highly regular local sub-structures on the actual polypeptide backbone chain. Two main types of secondary structure, the α-helix and the β-strand or β-sheets, were suggested in 1951 by Linus Pauling. These secondary structures are defined by patterns of hydrogen bonds between the main-chain peptide groups. They have a regular geometry, being constrained to specific values of the dihedral angles ψ and φ on the Ramachandran plot. Both the α-helix and the β-sheet represent a way of saturating all the hydrogen bond donors and acceptors in the peptide backbone. Some parts of the protein are ordered but do not form any regular structures. They should not be confused with random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. Several sequential secondary structures may form a "supersecondary unit".
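The backbone dihedral angles themselves are ordinary geometric quantities that can be computed directly from atomic coordinates. The following is a minimal NumPy sketch, not the implementation of any particular structural-biology package; the function name is arbitrary, the four input points stand for consecutive backbone atoms (for example C(i−1), N(i), Cα(i), C(i) for φ), and the coordinates in the usage lines are purely illustrative.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees defined by four points, e.g. the
    backbone atoms C(i-1), N(i), CA(i), C(i) for the phi angle."""
    b1 = p1 - p0
    b2 = p2 - p1
    b3 = p3 - p2
    n1 = np.cross(b1, b2)                       # normal to the plane of p0, p1, p2
    n2 = np.cross(b2, b3)                       # normal to the plane of p1, p2, p3
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))  # vector orthogonal to both n1 and b2
    x = np.dot(n1, n2)
    y = np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))

# Illustrative coordinates (not real atoms): these four points give 90 degrees.
p0, p1, p2, p3 = (np.array(v, dtype=float) for v in
                  [(1, 0, 0), (0, 0, 0), (0, 1, 0), (0, 1, 1)])
print(round(dihedral(p0, p1, p2, p3), 1))  # prints 90.0
```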
Tertiary structure
Tertiary structure refers to the three-dimensional structure created by a single protein molecule (a single polypeptide chain). It may include one or several domains. The α-helices and β-pleated-sheets are folded into a compact globular structure. The folding is driven by the non-specific hydrophobic interactions, the burial of hydrophobic residues from water, but the structure is stable only when the parts of a protein domain are locked into place by specific tertiary interactions, such as salt bridges, hydrogen bonds, and the tight packing of side chains and disulfide bonds. The disulfide bonds are extremely rare in cytosolic proteins, since the cytosol (intracellular fluid) is generally a reducing environment.
Quaternary structure
Quaternary structure is the three-dimensional structure consisting of the aggregation of two or more individual polypeptide chains (subunits) that operate as a single functional unit (multimer). The resulting multimer is stabilized by the same non-covalent interactions and disulfide bonds as in tertiary structure. There are many possible quaternary structure organisations. Complexes of two or more polypeptides (i.e. multiple subunits) are called multimers. Specifically it would be called a dimer if it contains two subunits, a trimer if it contains three subunits, a tetramer if it contains four subunits, and a pentamer if it contains five subunits, and so forth. The subunits are frequently related to one another by symmetry operations, such as a 2-fold axis in a dimer. Multimers made up of identical subunits are referred to with a prefix of "homo-" and those made up of different subunits are referred to with a prefix of "hetero-", for example, a heterotetramer, such as the two alpha and two beta chains of hemoglobin.
Homomers
An assemblage of multiple copies of a particular polypeptide chain can be described as a homomer, multimer or oligomer. Bertolini et al. in 2021 presented evidence that homomer formation may be driven by interaction between nascent polypeptide chains as they are translated from mRNA by adjacent ribosomes. Hundreds of proteins have been identified as being assembled into homomers in human cells. The process of assembly is often initiated by the interaction of the N-terminal region of polypeptide chains. Evidence from intragenic complementation studies that numerous gene products form homomers (multimers) in a variety of organisms was reviewed in 1965.
Domains, motifs, and folds in protein structure
Proteins are frequently described as consisting of several structural units. These units include domains, motifs, and folds. Despite the fact that there are about 100,000 different proteins expressed in eukaryotic systems, there are many fewer different domains, structural motifs and folds.
Structural domain
A structural domain is an element of the protein's overall structure that is self-stabilizing and often folds independently of the rest of the protein chain. Many domains are not unique to the protein products of one gene or one gene family but instead appear in a variety of proteins. Domains often are named and singled out because they figure prominently in the biological function of the protein they belong to; for example, the "calcium-binding domain of calmodulin". Because they are independently stable, domains can be "swapped" by genetic engineering between one protein and another to make chimeric proteins. A conserved combination of several domains that occur together in different proteins, such as the protein tyrosine phosphatase domain and C2 domain pair, has been called a "superdomain", which may evolve as a single unit.
Structural and sequence motifs
Structural and sequence motifs refer to short segments of protein three-dimensional structure or amino acid sequence that are found in a large number of different proteins.
Supersecondary structure
Tertiary protein structures can have multiple secondary elements on the same polypeptide chain. The supersecondary structure refers to a specific combination of secondary structure elements, such as β-α-β units or a helix-turn-helix motif. Some of them may be also referred to as structural motifs.
Protein fold
A protein fold refers to the general protein architecture, like a helix bundle, β-barrel, Rossmann fold or different "folds" provided in the Structural Classification of Proteins database. A related concept is protein topology.
Protein dynamics and conformational ensembles
Proteins are not static objects, but rather populate ensembles of conformational states. Transitions between these states typically occur on nanosecond to millisecond (or longer) timescales, and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis. Protein dynamics and conformational changes allow proteins to function as nanoscale biological machines within cells, often in the form of multi-protein complexes. Examples include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics."
Proteins are often thought of as relatively stable tertiary structures that experience conformational changes after being affected by interactions with other proteins or as a part of enzymatic activity. However, proteins may have varying degrees of stability, and some of the less stable variants are intrinsically disordered proteins. These proteins exist and function in a relatively 'disordered' state lacking a stable tertiary structure. As a result, they are difficult to describe by a single fixed tertiary structure. Conformational ensembles have been devised as a way to provide a more accurate and 'dynamic' representation of the conformational state of intrinsically disordered proteins.
Protein ensemble files are a representation of a protein that can be considered to have a flexible structure. Creating these files requires determining which of the various theoretically possible protein conformations actually exist. One approach is to apply computational algorithms to the protein data in order to try to determine the most likely set of conformations for an ensemble file. There are multiple methods for preparing data for the Protein Ensemble Database, which fall into two general methodologies – pool-based and molecular dynamics (MD) approaches. The pool-based approach uses the protein's amino acid sequence to create a massive pool of random conformations. This pool is then subjected to further computational processing that creates a set of theoretical parameters for each conformation based on its structure. Conformational subsets from this pool whose average theoretical parameters closely match known experimental data for the protein are selected. The alternative molecular dynamics approach takes multiple random conformations at a time and subjects all of them to experimental data. Here the experimental data serve as restraints placed on the conformations (e.g. known distances between atoms). Only conformations that manage to remain within the limits set by the experimental data are accepted. This approach often applies large amounts of experimental data to the conformations, which is a very computationally demanding task.
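As a loose illustration of the pool-based selection idea described above (not the actual Protein Ensemble Database pipeline), the following Python sketch stands in for per-conformer predictions with a single invented scalar observable and greedily picks a sub-ensemble whose average matches a hypothetical experimental value; all names and numbers are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_pool = 10_000                                  # size of the random conformer pool
predicted = rng.normal(20.0, 5.0, n_pool)        # hypothetical per-conformer observable
experimental_value = 23.5                        # hypothetical measured ensemble average
ensemble_size = 50

# Greedy selection: repeatedly add the conformer that brings the running
# ensemble average closest to the experimental value.
selected = []
remaining = list(range(n_pool))
for _ in range(ensemble_size):
    current = np.mean(predicted[selected]) if selected else 0.0
    n = len(selected)
    candidate_avg = (current * n + predicted[remaining]) / (n + 1)   # average if each candidate were added
    best = remaining[int(np.argmin(np.abs(candidate_avg - experimental_value)))]
    selected.append(best)
    remaining.remove(best)

print("ensemble average:", np.mean(predicted[selected]))   # should land close to 23.5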
Conformational ensembles have been generated for a number of highly dynamic and partially unfolded proteins, such as Sic1/Cdc4, p15 PAF, MKK7, beta-synuclein and p27.
Protein folding
As it is translated, a polypeptide exits the ribosome largely as a random coil and folds into its native state. The final structure of the protein chain is generally assumed to be determined by its amino acid sequence (Anfinsen's dogma).
Protein stability
Thermodynamic stability of proteins represents the free energy difference between the folded and unfolded protein states. This free energy difference is very sensitive to temperature, hence a change in temperature may result in unfolding or denaturation. Protein denaturation may result in loss of function and loss of the native state. The free energy of stabilization of soluble globular proteins typically does not exceed 50 kJ/mol. Taking into consideration the large number of hydrogen bonds involved in the stabilization of secondary structures and the stabilization of the inner core through hydrophobic interactions, the free energy of stabilization emerges as a small difference between large numbers.
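To illustrate the "small difference between large numbers" point, the short sketch below plugs invented but representative magnitudes into the familiar relation ΔG = ΔH − TΔS; the specific values are assumptions, not measurements of any particular protein.

# Illustrative arithmetic only (values are invented): the net folding free energy
# is a small difference between two large, mostly cancelling contributions.
delta_H = 500.0      # kJ/mol, enthalpic stabilization (hydrogen bonds, packing, ...)
T = 298.0            # K
delta_S = 1.544      # kJ/(mol*K), entropic cost of ordering the chain
delta_G = delta_H - T * delta_S
print(f"Net stabilization: {delta_G:.0f} kJ/mol")   # about 40 kJ/mol, under the typical 50 kJ/mol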
Protein structure determination
Around 90% of the protein structures available in the Protein Data Bank have been determined by X-ray crystallography. This method allows one to measure the three-dimensional (3-D) density distribution of electrons in the protein in the crystallized state, and thereby to determine the 3-D coordinates of all the atoms to a certain resolution. Roughly 7% of the known protein structures have been obtained by nuclear magnetic resonance (NMR) techniques. For larger protein complexes, cryo-electron microscopy can determine protein structures. The resolution is typically lower than that of X-ray crystallography or NMR, but the maximum resolution is steadily increasing. This technique is still particularly valuable for very large protein complexes such as virus coat proteins and amyloid fibers.
General secondary structure composition can be determined via circular dichroism. Vibrational spectroscopy can also be used to characterize the conformation of peptides, polypeptides, and proteins. Two-dimensional infrared spectroscopy has become a valuable method to investigate the structures of flexible peptides and proteins that cannot be studied with other methods. A more qualitative picture of protein structure is often obtained by proteolysis, which is also useful to screen for more crystallizable protein samples. Novel implementations of this approach, including fast parallel proteolysis (FASTpp), can probe the structured fraction and its stability without the need for purification. Once a protein's structure has been experimentally determined, further detailed studies can be done computationally, using molecular dynamics simulations of that structure.
Protein structure databases
A protein structure database is a database that is modeled around the various experimentally determined protein structures. The aim of most protein structure databases is to organize and annotate the protein structures, providing the biological community access to the experimental data in a useful way. Data included in protein structure databases often include 3D coordinates as well as experimental information, such as unit cell dimensions and angles for structures determined by X-ray crystallography. Although most entries, in this case either proteins or specific structure determinations of a protein, also contain sequence information, and some databases even provide means for performing sequence-based queries, the primary attribute of a structure database is structural information; sequence databases, by contrast, focus on sequence information and contain no structural information for the majority of entries. Protein structure databases are critical for many efforts in computational biology such as structure-based drug design, both in developing the computational methods used and in providing a large experimental dataset used by some methods to provide insights about the function of a protein.
Structural classifications of proteins
Protein structures can be grouped based on their structural similarity, topological class or a common evolutionary origin. The Structural Classification of Proteins database and CATH database provide two different structural classifications of proteins. When the structural similarity is large, the two proteins have possibly diverged from a common ancestor, and shared structure between proteins is considered evidence of homology. Structure similarity can then be used to group proteins together into protein superfamilies. If shared structure is significant but the fraction shared is small, the fragment shared may be the consequence of a more dramatic evolutionary event such as horizontal gene transfer, and joining proteins sharing these fragments into protein superfamilies is no longer justified. Topology of a protein can be used to classify proteins as well. Knot theory and circuit topology are two topology frameworks developed for classification of protein folds based on chain crossing and intrachain contacts, respectively.
Computational prediction of protein structure
The generation of a protein sequence is much easier than the determination of a protein structure. However, the structure of a protein gives much more insight into the function of the protein than its sequence. Therefore, a number of methods for the computational prediction of protein structure from its sequence have been developed. Ab initio prediction methods use just the sequence of the protein. Threading and homology modeling methods can build a 3-D model for a protein of unknown structure from experimental structures of evolutionarily related proteins, called a protein family.
See also
Biomolecular structure
Gene structure
Nucleic acid structure
PCRPi-DB
Ribbon diagram – 3D schematic representation of proteins
References
Further reading
50 Years of Protein Structure Determination Timeline - HTML Version - National Institute of General Medical Sciences at NIH
External links
Protein Structure drugdesign.org
Chemoproteomics
Chemoproteomics (also known as chemical proteomics) entails a broad array of techniques used to identify and interrogate protein-small molecule interactions. Chemoproteomics complements phenotypic drug discovery, a paradigm that aims to discover lead compounds on the basis of alleviating a disease phenotype, as opposed to target-based drug discovery (reverse pharmacology), in which lead compounds are designed to interact with predetermined disease-driving biological targets. As phenotypic drug discovery assays do not provide confirmation of a compound's mechanism of action, chemoproteomics provides valuable follow-up strategies to narrow down potential targets and eventually validate a molecule's mechanism of action. Chemoproteomics also attempts to address the inherent challenge of drug promiscuity in small molecule drug discovery by analyzing protein-small molecule interactions on a proteome-wide scale. A major goal of chemoproteomics is to characterize the interactome of drug candidates to gain insight into mechanisms of off-target toxicity and polypharmacology.
Chemoproteomics assays can be stratified into three basic types. Solution-based approaches involve the use of drug analogs that chemically modify target proteins in solution, tagging them for identification. Immobilization-based approaches seek to isolate potential targets or ligands by anchoring their binding partners to an immobile support. Derivatization-free approaches aim to infer drug-target interactions by observing changes in protein stability or drug chromatography upon binding. Computational techniques complement the chemoproteomic toolkit as parallel lines of evidence supporting potential drug-target pairs, and are used to generate structural models that inform lead optimization. Several targets of high profile drugs have been identified using chemoproteomics, and the continued improvement of mass spectrometer sensitivity and chemical probe technology indicates that chemoproteomics will play a large role in future drug discovery.
Background
Context
The conclusion of the Human Genome Project was followed with hope for a new paradigm in treating disease. Many fatal and intractable diseases were able to be mapped to specific genes, providing a starting point to better understand the roles of their protein products in illness. Drug discovery has made use of animal knock-out models that highlight the impact of a protein's absence, particularly in the development of disease, and medicinal chemists have leveraged computational chemistry to generate high affinity compounds against disease-causing proteins. Yet FDA drug approval rates have been on the decline over the last decade. One potential source of drug failure is the disconnect between early and late drug discovery. Early drug discovery focuses on genetic validation of a target, which is a strong predictor of success, but knock-out and overexpression systems are simplistic. Spatially and temporally conditional knock-out/knock-in systems have improved the level of nuance in in vivo analysis of protein function, but still fail to completely parallel the systemic breadth of pharmacological action. For example, drugs often act through multiple mechanisms, and often work best by engaging targets partially. Chemoproteomic tools offer a solution to bridge the gap between a genetic understanding of disease and a pharmacological understanding of drug action by identifying the many proteins involved in therapeutic success.
Basic tools
The chemoproteomic toolkit is anchored by liquid chromatography-tandem mass spectrometry (LC-MS/MS or LC-MS) based quantitative proteomics, which allows for the near complete identification and relative quantification of complex proteomes in biological samples. In addition to proteomic analysis, the detection of post-translational modifications, like phosphorylation, glycosylation, acetylation, and recently ubiquitination, which give insight into the functional state of a cell, is also possible. The vast majority of proteomic studies are analyzed using high-resolution orbitrap mass spectrometers and samples are processed using a generalizable workflow. A standard procedure begins with sample lysis, in which proteins are extracted into a denaturing buffer containing salts, an agent that reduces disulfide bonds, such as dithiothreitol, and an alkylating agent that caps thiol groups, such as iodoacetamide. Denatured proteins are proteolysed, often with trypsin, and then separated from other mixture components prior to analysis via LC-MS/MS. For more accurate quantification, different samples can be reacted with isobaric tandem mass tags (TMTs), a form of chemical barcode that allows for sample multiplexing, and then pooled.
Solution-based approaches
Broad proteomic and transcriptomic profiling has led to innumerable advances in the biomedical space, but the characterization of RNA and protein expression is limited in its ability to inform on the functional characteristics of proteins. Given that transcript and protein expression information leave gaps in knowledge surrounding the effects of post-translational modifications and protein-protein interactions on enzyme activity, and that enzyme activity varies across cell types, disease states, and physiological conditions, specialized tools are required to profile enzyme activity across contexts. Additionally, many identified enzymes have not been sufficiently characterized to yield actionable mechanisms on which to base functional assays. Without a basis for a functional biochemical readout, chemical tools are required to detect drug-protein interactions.
Activity-based protein profiling
Activity-based protein profiling (ABPP, also activity-based proteomics) is a technique that was developed to monitor the availability of enzymatic active sites to their endogenous ligands. ABPP uses specially designed probes that enter and form a covalent bond with an enzyme's active site, which confirms that the enzyme is in an active state. The probe is typically an analog of the drug whose mechanism is being studied, so covalent labeling of an enzyme is indicative of drug binding. ABPP probes are designed with three key functional units: (1) a site-directed covalent warhead (reactive group); (2) a reporter tag, such as biotin or rhodamine; and (3) a linker group. The site-directed covalent warhead, also called a covalent modifier, is an electrophile that covalently modifies a serine, cysteine, or lysine residue in the enzyme's active site and prevents future interactions with other ligands. ABPP probes are generally designed against enzymatic classes, and thus can provide systems-level information about the impact of cell state on enzymatic networks. The reporter tag is used to confirm labeling of the enzyme with the reactive group and can vary depending on the downstream readout. The most widely used reporters are fluorescent moieties that enable imaging and affinity tags, such as biotin, that allow for pull-down of labeled enzymes and analysis via mass spectrometry. There are drawbacks to each strategy, namely that fluorescent reporters do not allow for enrichment for proteomic analysis, while biotin-based affinity tags co-purify with endogenously biotinylated proteins. A linker group is used to connect the reactive group to the reporter, ideally in a manner that does not alter the activity of the probe. The most common linker groups are long alkyl chains, derivatized PEGs, and modified polypeptides.
Under the assumption that enzymes vary in their structure, function, and associations depending on a system's physiological or developmental state, it can be inferred that the accessibility of an enzyme's active site will also vary. Therefore, the ability of an ABPP probe to label an enzyme will also vary across conditions. Thus, the binding of a probe can reveal information about an enzyme's functional characteristics in different contexts. High-throughput screening has benefitted from ABPP, particularly in the area of competitive inhibition assays, in which biological samples are pre-incubated with drug candidates, which then compete with ABPP probes for binding to target enzymes. Compounds with high affinity to their targets will prevent binding of the probe, and the degree of probe binding can be used as an indication of compound affinity. Because ABPP probes label classes of enzymes, this approach can also be used to profile drug selectivity, as highly selective compounds will ideally outcompete probes at only a small number of proteins.
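The following minimal Python sketch shows how competitive ABPP readouts might be reduced to target-engagement values; the protein names and probe-labeling intensities are hypothetical placeholders rather than data from any real experiment.

# Hypothetical probe-labeling signal per protein (e.g. quantified reporter intensities).
probe_only = {"FAAH": 1.00e6, "ABHD6": 8.0e5, "MGLL": 1.2e6}           # vehicle, then probe
compound_then_probe = {"FAAH": 5.0e4, "ABHD6": 7.6e5, "MGLL": 1.1e6}   # inhibitor, then probe

for protein, control in probe_only.items():
    treated = compound_then_probe[protein]
    engagement = 1.0 - treated / control       # fraction of active sites blocked by the compound
    print(f"{protein}: {100 * engagement:.0f}% engagement")

# A selective inhibitor shows high engagement at its intended target (here the invented
# FAAH values) and little competition elsewhere, mirroring the selectivity-profiling logic above.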
Photoaffinity labeling
Unlike ABPP, which results in protein labeling upon probe binding, photoaffinity labeling probes require activation by photolysis before covalent bonding to a protein occurs. The presence of a photoreactive group makes this possible. These probes are composed of three connected moieties: (1) a drug scaffold; (2) a photoreactive group, such as a phenylazide, phenyldiazirine, or benzophenone; and (3) an identification tag, such as biotin, a fluorescent dye, or a click chemistry handle. The drug scaffold is typically an analog of a drug whose mechanism is being studied, and, importantly, binds to the target reversibly, which better mimics the interaction between most drugs and their targets. There are several varieties of photoreactive groups, but they are fundamentally different from ABPP probes: while ABPP specifically labels nucleophilic amino acids in a target's active site, photoaffinity labeling is non-specific, and thus is applicable to labeling a wider range of targets. The identification tag will vary depending on the type of analysis being done: biotin and click chemistry handles are suitable for enrichment of labeled proteins prior to mass spectrometry based identification, while fluorescent dyes are used when using a gel-based imaging method, such as SDS-PAGE, to validate interaction with a target.
Because photoaffinity probes are multifunctional, they are difficult to design. Chemists incorporate the same principles of structure-activity relationship modeling into photoaffinity probes that apply to drugs, but must do so without compromising the drug scaffold's activity or the photoreactive group's ability to bond. Since photoreactive groups bond indiscriminately, improper design can cause the probe to label itself or non-target proteins. The probe must remain stable in storage, across buffers, at various pH levels, and in living systems to ensure that labeling occurs only when exposed to light. Activation by light must also be fine-tuned, as radiation can damage cells.
Immobilization-based approaches
Immobilization-based chemoproteomic techniques encompass variations on microbead-based affinity pull-down, which is similar to immunoprecipitation, and affinity chromatography. In both cases, a solid support is used as an immobilization surface bearing a bait molecule. The bait molecule can be a potential drug if the investigator is trying to identify targets, or a target, such as an immobilized enzyme, if the investigator is screening for small molecules. The bait is exposed to a mixture of potential binding partners, which can be identified after removing non-binding components.
Microbead-based immobilization
Microbead-based immobilization is a modular technique in that it allows the investigator to decide whether they wish to fish for protein targets from the proteome or drug-like compounds from chemical libraries. The macroscopic properties of microbeads make them amenable to relatively low-labor enrichment applications, since they are easy to visualize and their bulk mass is readily removable from protein solutions. Microbeads were historically made of inert polymers, such as agarose and dextran, that are functionalized to attach a bait of choice. In the case of using proteins as bait, amine functional groups are common linkers to facilitate attachment. More modern approaches have benefitted from the popularization of Dynabeads, a type of magnetic microbead, which enable magnetic separation of bead-immobilized analytes from treated samples. Magnetic beads exhibit superparamagnetic properties, which make them very easy to remove from solution using an external magnet. In a simplified workflow, magnetic beads are used to immobilize a protein target, then the beads are mixed with a chemical library to screen for potential ligands. High-affinity ligands bind to the immobilized target and resist removal by washing, so they are enriched in the sample. Conversely, a ligand of interest can be immobilized and screened against the proteome by incubation with a lysate.
Hybrid solution- and immobilization-based strategies have been applied, in which ligands functionalized with an enrichment tag, such as biotin, are allowed to float freely in solution and find their target proteins. After an incubation period, ligand-protein complexes can be reacted with streptavidin-coated beads, which bind the biotin tag and allow for pull-down and identification of interaction partners. This technology can be extended to assist with preparation of samples for ABPP and photoaffinity labeling. While immobilization approaches have been reproducible and successful, it is impossible to avoid the limitation of immobilization-induced steric hindrance, which interferes with induced fit. Another drawback is non-specific adsorption of both proteins and small molecules to the bead surface, which has the potential to generate false positives.
Affinity chromatography
Affinity chromatography emerged in the 1950s as a rarely used method for purifying enzymes; it has since seen mainstream use and is the oldest among chemoproteomic approaches. Affinity chromatography is performed following one of two basic formats: ligand immobilization or target immobilization. Under the ligand immobilization format, a ligand of interest - often a drug lead - is immobilized within a chromatography column and acts as the stationary phase. A complex sample consisting of many proteins, such as a cell lysate, is passed through the column and the target of interest binds to the immobilized ligand while other sample components pass through the column unretained. Under the target immobilization format, a target of interest - often a disease-relevant protein - is immobilized within a chromatography column and acts as the stationary phase. Pooled compound libraries are then passed through the column in an application buffer, ligands are retained through binding interactions with the stationary phase, and other compounds pass through the column unretained. In both cases, retained analytes can be eluted from the column and identified using mass spectrometry.
Derivatization-free approaches
While the approaches above have shown success, they are inherently limited by their need for derivatization, which jeopardizes the affinity of the interaction that derivatized compounds are said to emulate and introduces steric hindrance. Immobilized ligands and targets are limited in their ability to move freely through space in a way that replicates the native protein-ligand interaction, and conformational change from induced fit is often limited when proteins or drugs are immobilized. Probe-based approaches also alter the three-dimensional nature of the ligand-protein interaction by introducing functional groups to the ligand, which can alter compound activity. Derivatization-free approaches aim to infer interactions by proxy, often through observations of changes to protein stability upon binding, and sometimes through chromatographic co-elution.
The stability-based methods below are thought to work due to ligand-induced shifts in equilibrium concentrations of protein conformational states. A single protein type in solution may be represented by individual molecules in a variety of conformations, with many of them different from one another despite being identical in amino acid sequence. Upon binding a drug, the majority of ligand-bound protein enters an energetically favorable conformation, and moves away from the unpredictable distribution of less stable conformers. Thus, ligand binding is said to stabilize proteins, making them resistant to thermal, enzymatic and chemical degradation. Some examples of stability-based derivatization-free approaches follow.
Thermal proteome profiling (TPP)
Thermal proteome profiling (also known as the Cellular Thermal Shift Assay) is a recently popularized strategy to infer ligand-protein interactions from shifts in protein thermal stability induced by ligand binding. In a typical assay setup, protein-containing samples are exposed to a ligand of choice, then those samples are aliquoted and heated to a series of individual temperature points. Upon binding to a ligand, a protein's thermal stability is expected to increase, so ligand-bound proteins will be more resistant to thermal denaturation. After heating, the amount of non-denatured protein remaining is analyzed using quantitative proteomics and stability curves are generated. Upon comparison to an untreated stability curve, the treated curve is expected to shift to the right, indicating that ligand-induced stabilization occurred. Historically, thermal proteome profiling has been assessed using a western blot against a known target of interest. With the advent of high resolution Orbitrap mass spectrometers, this type of experiment can be executed on a proteome-wide scale and stability curves can be generated for thousands of proteins at once. Thermal proteome profiling has been successfully performed in vitro, in situ, and in vivo. When coupled with mass spectrometry, this technique is referred to as the Mass Spectrometry Cellular Thermal Shift Assay (MS-CETSA).
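A minimal sketch of how a thermal shift might be quantified for a single protein is shown below; it fits a sigmoid to invented soluble-fraction data with and without drug and compares the fitted melting midpoints. It assumes NumPy and SciPy are available and is not the published TPP/MS-CETSA analysis pipeline.

import numpy as np
from scipy.optimize import curve_fit

def melt_curve(T, Tm, slope):
    """Fraction of non-denatured protein as a function of temperature."""
    return 1.0 / (1.0 + np.exp((T - Tm) / slope))

temps = np.array([37, 41, 45, 49, 53, 57, 61, 65], dtype=float)   # heating points, degrees C
vehicle = np.array([1.00, 0.98, 0.90, 0.65, 0.30, 0.10, 0.04, 0.02])   # invented soluble fractions
treated = np.array([1.00, 0.99, 0.97, 0.90, 0.70, 0.35, 0.12, 0.04])

popt_vehicle, _ = curve_fit(melt_curve, temps, vehicle, p0=(50.0, 2.0))
popt_treated, _ = curve_fit(melt_curve, temps, treated, p0=(50.0, 2.0))

delta_Tm = popt_treated[0] - popt_vehicle[0]
print(f"delta Tm = {delta_Tm:.1f} degrees C")   # a positive shift is consistent with ligand-induced stabilization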
Drug affinity responsive target stability (DARTS)
The Drug Affinity Responsive Target Stability assay follows a similar basic assumption to TPP – that protein stability is increased by ligand binding. In DARTS, however, protein stability is assessed in response to digestion by a protease. Briefly, a sample of cell lysate is incubated with a small molecule of interest, the sample is split into aliquots, and each aliquot goes through limited proteolysis after addition of protease. Limited proteolysis is critical, since complete proteolysis would render even a ligand-bound protein completely digested. Samples are then analyzed via SDS-PAGE to assess differences in extent of digestion, and bands are then excised and analyzed via mass spectrometry to confirm the identities of proteins that resist proteolysis. Alternatively, if the target is already suspected and is being tested for validation, a western blot protocol can be used to identify the protein directly.
Stability of proteins from rates of oxidation (SPROX)
Stability of Proteins from Rates of Oxidation also rests upon the assumption that ligand binding protects proteins from various forms of degradation, in this case oxidation of methionine residues. In SPROX, a lysate is split and treated with drug or a DMSO control, then each group is further aliquoted into separate samples with increasing concentrations of the chaotrope and denaturant guanidinium hydrochloride (GuHCl). Depending on the concentration of GuHCl, proteins will unfold to varying degrees. Each sample is then reacted with hydrogen peroxide, which oxidizes methionine residues. Proteins that are stabilized by the drug will remain folded at higher concentrations of GuHCl and will experience less methionine oxidation. Oxidized methionine residues can be quantified via LC-MS/MS and used to generate methionine stability curves, which are a proxy for drug binding. There are drawbacks to the SPROX assay, namely that the only relevant peptides from SPROX samples are those with methionine residues, which account for approximately one-third of peptides, and for which there are currently no viable enrichment techniques. Only those methionines that are exposed to oxidation provide meaningful information, and not all differences in methionine oxidation are consistent with protein stabilization. Without enrichment, LC-MS/MS analysis of these peptides is challenging, as the contribution of other sample components to mass spectrometer noise can drown out relevant signal. Therefore, SPROX samples require fractionation to concentrate peptides of interest prior to LC-MS/MS analysis.
Affinity selection-mass spectrometry
While adoption of affinity selection-mass spectrometry (AS-MS) has led to an expansion of assay formats, the general technique follows a simple scheme. Protein targets are incubated with small molecules to allow for the formation of stable ligand-protein complexes, unbound small molecules are removed from the mixture, and the components of remaining ligand-protein complexes are analyzed using mass spectrometry. The bound ligands identified are then categorized as hits and can be used to provide a starting point for lead generation. Since AS-MS measures binding in an unbiased manner, a hit does not need to be tied to a functional readout, opening the possibility of identifying drugs that act beyond active sites, such as allosteric modulators and chemical chaperones, all in a single assay. Because small molecules can be directly identified by their exact mass, no derivatization is needed to confirm the validity of a hit. Among derivatization- and label-free approaches, AS-MS has the unique advantage of being amenable to the assessment of multiple test compounds per experiment—as many as 20,000 compounds per experiment have been reported in the literature, and one group has reported assaying chemical libraries against heterogeneous protein pools. The basic steps of AS-MS are described in more detail below.
Affinity selection
A generalized AS-MS workflow begins with the pre-incubation of purified protein solutions (i.e. target proteins) with chemical libraries in microplates. Assays can be designed to contain sufficiently high protein concentrations to prevent competition for binding sites between structural analogs, ensuring that hits across a range of affinities can be identified; inversely, assays can contain low protein concentrations to allow for distinction between high and low affinity analogs and to inform structure-activity relationships. The choice of a chemical library is less stringent than other high-throughput screening strategies owing to the lack of functional readouts, which would otherwise require deconvolution of the source compound that generates biological activity. Thus, the typical range for AS-MS is 400-3,000 compounds per pool. Other considerations for screening are more practical, such as a need to balance desired compound concentration, which is usually in the micromolar range, with the fact that compound stock solutions are typically stored as 10 millimolar solutions, effectively capping the number of compounds screened in the thousands. After appropriate test compounds and targets are selected and incubated, ligand-protein complexes can be separated by a variety of means.
Separation of unbound small molecules and ligand-protein complexes
Affinity selection is followed by the removal of unbound small molecules via ultrafiltration or size-exclusion chromatography, making only protein-bound ligands available for downstream analysis. Several types of ultrafiltration have been reported with varying degrees of throughput, including pressure-based, centrifugal, and precipitation-based ultrafiltration. Under both pressure-based and centrifugal formats, unbound small molecules are forced through a semipermeable membrane that excludes proteins on the basis of size. Multiple washing steps are required after ultrafiltration to ensure complete removal of unbound small molecules. Ultrafiltration can also be confounded by non-specific adsorption of unbound small molecules to the membrane. A group at the University of Illinois published a screening strategy involving amyloid-beta, in which ligands were used to stabilize the protein and prevent its aggregation. Ultrafiltration was used to precipitate aggregated amyloid-beta and remove unbound ligands, while the ligand-stabilized protein was detected and quantified using mass spectrometry.
Size-exclusion chromatography (SEC) is more widely used in industrial drug discovery and has the advantage of more efficient removal of unbound compounds as compared to ultrafiltration. Size-exclusion approaches have been described in both high-performance liquid chromatography (HPLC) based and spin column formats. In either case, a mixture of unbound ligands, proteins, and ligand-protein complexes is passed through a column of porous beads. Ligand-protein complexes are excluded from entering the beads and exit the column quickly, while unbound ligands must travel through the beads and are retained by the column for a longer time. Ligands that elute from the column early on are therefore inferred to be bound to a protein. The automated ligand identification system (ALIS), developed by Schering-Plough, uses an HPLC-based SEC system coupled to liquid chromatography-mass spectrometry (LC-MS), which separates ligand-protein complexes from unbound ligands using SEC and diverts the complexes toward the LC-MS system for on-line analysis of bound ligands. Novartis' SpeedScreen uses SEC in 96-well spin column format, also known as gel filtration chromatography, which allows for simultaneous removal of unbound ligands from up to 96 samples. Samples are also passed through porous beads, but centrifugation is used to move the sample through the column. SpeedScreen is not coupled to an LC-MS system and requires further processing prior to final analysis. In this case, ligands must be freed from their targets and analyzed separately.
Analysis of bound ligands
The final step requires bioanalytical separation of bound ligands from their targets, and subsequent identification of ligands using liquid chromatography-mass spectrometry. AS-MS offers means for identifying small molecule-protein interactions either directly - through top-down proteomic detection of intact complexes - or indirectly - through denaturation of small molecule-protein complexes followed by identification of small molecules using mass spectrometry. The top-down approach requires direct infusion of the complex into an electrospray ionization mass spectrometry source under conditions gentle enough to preserve the interaction and maintain its integrity in the transition from liquid to gas. While this was shown to be possible by Ganem and Henion in 1991, it is not suitable for high throughput. Interestingly, electron capture dissociation, which is typically used in structure elucidation of peptides, has been used to identify ligand binding sites during top-down analysis. A simpler method for analysis of bound ligands uses a protein precipitation extraction to denature proteins and release ligands into the precipitation solution, which can then be diluted and identified on an LC-MS system.
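As an illustration of this final identification step, the sketch below matches measured neutral monoisotopic masses against a small hypothetical compound library within a ppm tolerance; all masses and compound identifiers are invented for the example.

observed_masses = [314.1521, 448.2208]        # neutral monoisotopic masses inferred from LC-MS

library = {                                   # compound_id: theoretical neutral monoisotopic mass
    "CPD-0001": 314.1518,
    "CPD-0002": 322.0847,
    "CPD-0003": 448.2213,
    "CPD-0004": 448.3579,
}

TOLERANCE_PPM = 5.0

for mass in observed_masses:
    for compound, theoretical in library.items():
        ppm_error = 1e6 * (mass - theoretical) / theoretical
        if abs(ppm_error) <= TOLERANCE_PPM:
            print(f"{compound} matches {mass:.4f} ({ppm_error:+.1f} ppm)")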
Target identification by chromatographic co-elution (TICC)
Target identification by chromatographic co-elution does not rely on differences in protein stability after drug treatment. Instead, it rests on the assumption that drugs form stable complexes with their target proteins, and that those complexes are robust enough to survive a chromatographic separation. In a typical workflow, a cell lysate is incubated with a drug, then the lysate is injected onto an ion-exchange chromatography system and fractionated. Lysate proteins are eluted along an ionic strength gradient and fractions are collected over short time intervals. Each fraction is analyzed by LC-MS/MS for both protein and drug content. Individual proteins elute with characteristic profiles, and pre-incubated drugs should mirror the elution profiles of the targets they complex with. Correlation of drug and protein elution profiles allows for targets to be narrowed down and inferred.
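A minimal sketch of the co-elution inference is given below: candidate targets are ranked by the Pearson correlation between the drug's fraction-by-fraction elution profile and each protein's profile. The protein names and profiles are invented placeholders for the LC-MS/MS quantities measured per fraction.

import numpy as np

drug_profile = np.array([0, 0, 1, 4, 9, 7, 3, 1, 0, 0, 0, 0], dtype=float)
protein_profiles = {
    "protein_A": np.array([0, 0, 1, 3, 8, 8, 4, 1, 0, 0, 0, 0], dtype=float),
    "protein_B": np.array([5, 7, 4, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float),
    "protein_C": np.array([0, 0, 0, 0, 0, 1, 2, 5, 8, 6, 2, 0], dtype=float),
}

# Score each protein by how well its elution profile tracks the drug's profile.
scores = {
    name: float(np.corrcoef(drug_profile, profile)[0, 1])
    for name, profile in protein_profiles.items()
}
for name, r in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: Pearson r = {r:.2f}")
# In this toy example, protein_A co-elutes with the drug and would be prioritized as a putative target.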
Computational approaches
Molecular docking simulations
The development and application of bench-top chemoproteomics assays is often time consuming and cost-prohibitive. Molecular docking simulations have emerged as relatively low-cost, high-throughput means for ranking the strength of small molecule-protein interactions. Molecular docking requires accurate modeling of both ligand and protein conformation at atomic resolution, and is therefore aided by empirical determination of protein structure, often through orthogonal methods such as x-ray crystallography and cryogenic electron microscopy. Molecular docking strategies are categorized by the type of information that is already known about the ligand and protein of interest.
Ligand-based methods
When a bioactive ligand with a known structure is to be screened against a protein with limited structural information, modeling is done with regard to ligand structure. Pharmacophore modeling identifies key electronic and structural features that are associated with therapeutic activity across similarly bioactive structural analogs, and accordingly requires large libraries with corresponding experimental data to enhance predictive power. Compound structures are superimposed virtually and common elements are scored on the basis of their tendency toward bioactivity. The move away from lock-and-key based modeling toward induced-fit based modeling has improved binding predictions but has also given rise to the challenge of modeling ligand flexibility, which requires building a database of conformational models and uses large amounts of data storage space. Another approach is the so-called on-the-fly method, in which conformational models are tested during the process of pharmacophore modeling, without a database; this method requires significantly less storage space at the cost of high computing time. A second challenge arises from the decision of how to superimpose analog structures. A common approach is to use a least-squares regression for superimposition, but this requires user-selected anchor points and therefore introduces human bias into the process. Pharmacophore models require training data sets, giving rise to another challenge—selection of the appropriate library of compounds to adequately train models. Data set size and chemical diversity significantly affect performance of the downstream product.
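For the superimposition step mentioned above, one widely used least-squares solution is the Kabsch algorithm; the sketch below computes the optimal-rotation RMSD between two small sets of corresponding 3-D points. The coordinates and the choice of corresponding atoms (the "anchor points") are hypothetical, and real pharmacophore tools add many refinements on top of this core step.

import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal-rotation RMSD between two (N, 3) coordinate arrays with known correspondence."""
    P = P - P.mean(axis=0)                        # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                   # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against an improper rotation (reflection)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # optimal rotation
    diff = (P @ R.T) - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# Hypothetical anchor-point coordinates for two analogs (angstroms).
P = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.4, 0.0], [0.0, 1.4, 0.5]])
Q = np.array([[0.1, 0.0, 0.0], [0.0, 1.5, 0.0], [-1.4, 1.5, 0.1], [-1.4, 0.0, 0.4]])
print(f"RMSD after superposition: {kabsch_rmsd(P, Q):.2f} Å")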
Structure-based methods
Ideally, the structure of a drug target is known, which allows for structure-based pharmacophore modeling. A structure-based model integrates key structural properties of the protein's binding site, such as the spatial distribution of interaction points, with features identified from ligand based pharmacophore models to generate a holistic simulation of the ligand-protein interaction. A major challenge in structure-based modeling is to narrow down pharmacophore features, of which many are initially identified, to a set of high priority features, as modeling too many features is a computational challenge. Another challenge is the incompatibility of pharmacophore modeling with quantitative structure-activity relationship (QSAR) profiling. Accurate QSAR models rely on inclusion of many potential targets, not just the therapeutic target. For example, important pharmacophores may yield high-affinity interactions with therapeutic targets, but they may also lead to undesirable off-target activity, and they may also be substrates of metabolic enzymes, such as Cytochrome P450s. Therefore, pharmacophore modeling against therapeutic targets is only one component of the compound's total structure-activity relationship.
Applications
Druggability
Chemoproteomic strategies have been used to expand the scope of druggable targets. While historically successful drugs target well-defined binding pockets of druggable proteins, these define only about 15% of the annotated proteome. To continue growing our pharmacopoeia, bold approaches to ligand discovery are required. The use of ABPP has coincidentally reinvigorated the search for newly ligandable sites. ABPP probes, intentionally used to label enzyme active sites, have been found to label many nucleophilic regions on many different proteins unintentionally. Originally thought to be experimental noise, these unintended reactions have clued researchers in to the presence of sites that can potentially be targeted by novel covalent drugs. This is particularly salient in the case of proteins with no enzymatic activity to inhibit, or in the case of mutated, drug-resistant proteins. In any of these cases, proteins can potentially be targeted for degradation using the novel drug modality of proteolysis-targeting chimeras (PROTACs). PROTACs are heterobifunctional small molecules that are designed to interact with a target and an E3 ubiquitin ligase. The interaction brings the E3 ubiquitin ligase close enough to the target that the target is labeled for degradation. The existence of potential covalent binding sites across the proteome suggests that many targets can be covalently engaged using such a modality.
Drug repurposing
Chemoproteomics is at the forefront of drug repurposing. This is particularly relevant in the era of COVID-19, which saw a dire need to rapidly identify FDA-approved drugs that have antiviral activity. In this context, a phenotypic screen is usually employed to identify drugs with a desired effect in vitro, such as inhibition of viral plaque formation. If a drug produces a positive test, the next step is to determine whether it is acting on a known or novel target. Chemoproteomics is thus a follow-up to phenotypic screening. In the case of COVID-19, Friman et al. investigated off-target effects of the broad-spectrum antiviral Remdesivir, which was among the first repurposed drugs to be used in the pandemic. Remdesivir was tested via thermal proteome profiling in a HepG2 cellular thermal shift assay, along with the controversial drug hydroxychloroquine, and investigators discovered TRIP13 as a potential off-target of Remdesivir.
High-throughput screening
Approved drugs are never identified as hits in high-throughput screens because the chemical libraries used in screening have not been optimized against any targets. However, methods like affinity chromatography and affinity selection-mass spectrometry are workhorses of the pharmaceutical industry, and AS-MS in particular has been documented to produce a significant number of hits across many classes of difficult-to-drug proteins. This is due in large part to the sheer volume of ligands that can be screened in a single assay. Researchers at the iHuman Institute at ShanghaiTech University employed a scheme in which 20,000 compounds per pool were screened against A2AR, a G protein-coupled receptor that is difficult to drug, with a 0.12% hit rate, leading to several high affinity ligands.
See also
Chemical genetics
Chemical biology
Drug discovery
Omics
Phenotypic screening
Proteomics
References
Chemical biology
Chemistry
Branches of biology
Test method
A test method is a method for a test in science or engineering, such as a physical test, chemical test, or statistical test. It is a definitive procedure that produces a test result. In order to ensure accurate and relevant test results, a test method should be "explicit, unambiguous, and experimentally feasible", as well as effective and reproducible.
A test can be considered an observation or experiment that determines one or more characteristics of a given sample, product, process, or service. The purpose of testing involves a prior determination of expected observation and a comparison of that expectation to what one actually observes. The results of testing can be qualitative (yes/no), quantitative (a measured value), or categorical and can be derived from personal observation or the output of a precision measuring instrument.
Usually the test result is the dependent variable, the measured response based on the particular conditions of the test or the level of the independent variable. Some tests, however, may involve changing the independent variable to determine the level at which a certain response occurs: in this case, the test result is the independent variable.
Importance
In software development, engineering, science, manufacturing, and business, developers, researchers, manufacturers, and related personnel must understand and agree upon methods of obtaining data and making measurements. It is common for a physical property to be strongly affected by the precise method of testing or measuring that property. As such, fully documenting experiments and measurements, and providing the needed descriptions of specifications, contracts, and test methods, is vital.
Using a standardized test method, perhaps published by a respected standards organization, is a good place to start. Sometimes it is more useful to modify an existing test method or to develop a new one, though such home-grown test methods should be validated and, in certain cases, demonstrate technical equivalency to primary, standardized methods. Again, documentation and full disclosure are necessary.
A well-written test method is important. However, even more important is choosing a method of measuring the correct property or characteristic. Not all tests and measurements are equally useful: usually a test result is used to predict or imply suitability for a certain purpose. For example, if a manufactured item has several components, test methods may have several levels of connections:
test results of a raw material should connect with tests of a component made from that material
test results of a component should connect with performance testing of a complete item
results of laboratory performance testing should connect with field performance
These connections or correlations may be based on published literature, engineering studies, or formal programs such as quality function deployment. Validation of the suitability of the test method is often required.
Content
Quality management systems usually require full documentation of the procedures used in a test. The document for a test method might include:
descriptive title
scope over which class(es) of items, policies, etc. may be evaluated
date of last effective revision and revision designation
reference to most recent test method validation
person, office, or agency responsible for questions on the test method, updates, and deviations
significance or importance of the test method and its intended use
terminology and definitions to clarify the meanings of the test method
types of apparatus and measuring instrument (sometimes the specific device) required to conduct the test
sampling procedures (how samples are to be obtained and prepared, as well as the sample size)
safety precautions
required calibrations and metrology systems
natural environment concerns and considerations
testing environment concerns and considerations
detailed procedures for conducting the test
calculation and analysis of data
interpretation of data and test method output
report format, content, data, etc.
Validation
Test methods are often scrutinized for their validity, applicability, and accuracy. It is very important that the scope of the test method be clearly defined, and any aspect included in the scope is shown to be accurate and repeatable through validation.
Test method validations often encompass the following considerations:
accuracy and precision; demonstration of accuracy may require the creation of a reference value if none is yet available
repeatability and reproducibility, sometimes in the form of a Gauge R&R.
range, or a continuum scale over which the test method would be considered accurate (e.g., 10 N to 100 N force test)
measurement resolution, be it spatial, temporal, or otherwise
curve fitting, typically for linearity, which justifies interpolation between calibrated reference points (a brief sketch follows this list)
robustness, or the insensitivity to potentially subtle variables in the test environment or setup which may be difficult to control
usefulness to predict end-use characteristics and performance
measurement uncertainty
interlaboratory or round robin tests
other types of measurement systems analysis
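A brief, hypothetical sketch of the linearity check mentioned in the list above: a straight line is fitted to instrument readings taken at calibrated reference points across the claimed range, and the fit quality is reported. The reference values and readings below are invented, as is the choice of force units.

import numpy as np

reference = np.array([10.0, 25.0, 50.0, 75.0, 100.0])    # e.g. applied force in N, from calibrated references
measured  = np.array([10.2, 24.8, 50.3, 74.6, 100.4])    # corresponding instrument readings, N

slope, intercept = np.polyfit(reference, measured, 1)     # least-squares straight-line fit
predicted = slope * reference + intercept
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.3f}, intercept = {intercept:.2f} N, R^2 = {r_squared:.4f}")
# A slope near 1, an intercept near 0, and R^2 close to 1 would support interpolation
# between the calibrated points over the 10 N to 100 N range cited above.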
See also
Certified reference materials
Data analysis
Design of experiments
Document management system
EPA Methods
Integrated test facility
Measurement systems analysis
Measurement uncertainty
Metrication
Observational error
Replication (statistics)
Sampling (statistics)
Specification (technical standard)
Test management approach
Verification and validation
References
Related standards
ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods
ASTM E691 Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method
ASTM E1488 Standard Guide for Statistical Procedures to Use in Developing and Applying Test Methods
ASTM E2282 Standard Guide for Defining the Test Result of a Test Method
ASTM E2655 Standard Guide for Reporting Uncertainty of Test Results and Use of the Term Measurement Uncertainty in ASTM Test Methods
Metrology
Measurement
Quality control
Sensitivity analysis
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
Motivation
A mathematical model (for example in biology, climate change, economics, renewable energy, agronomy...) can be highly complex, and as a result, its relationships between inputs and outputs may be poorly understood. In such cases, the model can be viewed as a black box, i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, errors in input data, parameter estimation and approximation procedures, absence of information and poor or partial understanding of the driving forces and mechanisms, choice of underlying hypotheses of the model, and so on. This uncertainty limits our confidence in the reliability of the model's response or output. Further, models may have to cope with the natural intrinsic variability of the system (aleatory uncertainty), such as the occurrence of stochastic events.
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance, and can be useful to determine the impact of an uncertain variable for a range of purposes, including:
Testing the robustness of the results of a model or system in the presence of uncertainty.
Increased understanding of the relationships between input and output variables in a system or model.
Uncertainty reduction, through the identification of model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness.
Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
Model simplification – fixing model input that has no effect on the output, or identifying and removing redundant parts of the model structure.
Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).
For calibration of models with a large number of parameters, by focusing on the sensitive parameters.
To identify important connections between observations, model inputs, and predictions or forecasts, leading to the development of better models.
Mathematical formulation and vocabulary
The object of study for sensitivity analysis is a function f (called the "mathematical model" or "programming code"), viewed as a black box, with the d-dimensional input vector X = (X_1, X_2, ..., X_d) and the output Y, written as Y = f(X).
The variability of the input parameters X has an impact on the output Y. While uncertainty analysis aims to describe the distribution of the output Y (providing its statistics, moments, pdf, cdf, ...), sensitivity analysis aims to measure and quantify the impact of each input X_i, or of a group of inputs, on the variability of the output Y (by calculating the corresponding sensitivity indices).
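As a concrete, illustrative example of this formulation (not taken from any particular reference), the following Python sketch estimates first-order variance-based sensitivity indices S_i = Var(E[Y|X_i]) / Var(Y) for a toy linear model by Monte Carlo sampling, using a Saltelli-style estimator; the model, sample size and random seed are assumptions made for the example.

import numpy as np

def f(X):
    # Toy model with two independent inputs on [0, 1]; analytically S1 = 0.2 and S2 = 0.8.
    return X[:, 0] + 2.0 * X[:, 1]

rng = np.random.default_rng(1)
N, d = 100_000, 2
A = rng.uniform(size=(N, d))          # two independent input sample matrices
B = rng.uniform(size=(N, d))
fA, fB = f(A), f(B)
var_Y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # replace column i of A with the corresponding column of B
    S_i = np.mean(fB * (f(ABi) - fA)) / var_Y    # Saltelli-style estimate of Var(E[Y|X_i]) / Var(Y)
    print(f"S_{i + 1} estimate: {S_i:.2f}")       # close to 0.20 and 0.80 for this toy model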
Challenges, settings and related issues
Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating the function f) multiple times. Depending on the complexity of the model, there are many challenges that may be encountered during model evaluation. Therefore, the choice of sensitivity analysis method is typically dictated by a number of problem constraints, settings or challenges. Some of the most common are:
Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach. This can be a significant problem when:
Time-consuming models are very often encountered when complex models are involved: a single run of the model takes a significant amount of time (minutes, hours or longer). The use of a statistical model (metamodel, data-driven model), including HDMR, to approximate the function f is one way of reducing the computational cost.
The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. Therefore, screening methods can be useful for dimension reduction. Another way to tackle the curse of dimensionality is to use sampling based on low discrepancy sequences.
Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. Correlations between inputs must then be taken into account in the analysis.
Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate.
Multiple or functional outputs: Sensitivity analysis was generally introduced for single-output codes, but it extends to cases where the output is a vector or a function. Correlated outputs do not preclude performing a separate sensitivity analysis for each output of interest; however, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret.
Stochastic code: A code is said to be stochastic when, for several evaluations of the code with the same inputs, different outputs are obtained (as opposed to a deterministic code when, for several evaluations of the code with the same inputs, the same output is always obtained). In this case, it is necessary to separate the variability of the output due to the variability of the inputs from that due to stochasticity.
Data-driven approach: Sometimes it is not possible to evaluate the code at all desired points, either because the code is confidential or because the experiment is not reproducible. The code output is only available for a given set of points, and it can be difficult to perform a sensitivity analysis on a limited set of data. We then build a statistical model (metamodel, data-driven model) from the available data (used for training) to approximate the code (the function f).
To address the various constraints and challenges, a number of methods for sensitivity analysis have been proposed in the literature, which we will examine in the next section.
Sensitivity analysis methods
There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions, partial derivatives or elementary effects. In general, however, most procedures adhere to the following outline:
Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note that this can be difficult and many methods exist to elicit uncertainty distributions from subjective data.
Identify the model output to be analysed (the target of interest should ideally have a direct relation to the problem tackled by the model).
Run the model a number of times using some design of experiments, dictated by the method of choice and the input uncertainty.
Using the resulting model outputs, calculate the sensitivity measures of interest.
In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis.
The various types of "core methods" (discussed below) are distinguished by the various sensitivity measures which are calculated, and these categories can overlap to some extent. Alternative ways of obtaining these measures, suited to the constraints of the problem, can be given. In addition, an engineering view of the methods that takes into account the four important sensitivity analysis parameters has also been proposed.
Visual analysis
The first intuitive approach (especially useful in less complex cases) is to analyze the relationship between each input and the output using scatter plots, and to observe the behavior of these pairs. The diagrams give an initial idea of the correlation and of which inputs have an impact on the output. Figure 2 shows an example where two of the inputs are highly correlated with the output.
One-at-a-time (OAT)
One of the simplest and most common approaches is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output. OAT customarily involves
moving one input variable, keeping others at their baseline (nominal) values, then,
returning the variable to its nominal value, then repeating for each of the other inputs in the same way.
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears to be a logical approach, as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed at their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, which are more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modelers for practical reasons: in case of model failure under OAT analysis, the modeler immediately knows which input factor is responsible for the failure.
Despite its simplicity, however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence of interactions between input variables and is unsuitable for nonlinear models.
The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. The convex hull bounding all these points is an octahedron which has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axis points of a hyperrectangle forms a hyperoctahedron whose volume is only a fraction 1/n! of the total parameter space, where n is the number of inputs. With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added. While the sparsity of OAT is theoretically not a concern for linear models, true linearity is rare in nature.
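As an illustration of the OAT procedure described above, the following sketch perturbs each input of a toy model one at a time around a baseline point and records the resulting change in the output. The model, baseline values and step size are hypothetical choices made only for illustration, not taken from any particular software package.

```python
import numpy as np

def toy_model(x):
    # Hypothetical model: nonlinear, with an interaction term between inputs 0 and 2.
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

baseline = np.array([1.0, 1.0, 1.0])   # nominal values of the three inputs
step = 0.1                             # perturbation applied to one input at a time

y0 = toy_model(baseline)
for i in range(len(baseline)):
    x = baseline.copy()
    x[i] += step                       # move input i, keep the others at baseline
    effect = toy_model(x) - y0         # change in output attributable to input i alone
    print(f"input {i}: OAT effect = {effect:+.4f}")
```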
Morris
Named after statistician Max D. Morris, this method is suitable for screening systems with many parameters. It is also known as the method of elementary effects because it combines repeated steps along the various parametric axes.
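A minimal sketch of the elementary-effects idea follows. The design is simplified (random restart points rather than the full Morris trajectory scheme), and the toy model, step size and number of repetitions are assumptions made for illustration; mu* (the mean of absolute elementary effects) is commonly read as overall importance, and sigma as an indicator of nonlinearity or interactions.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Hypothetical model used only for illustration.
    return np.sin(x[0]) + 7.0 * np.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * np.sin(x[0])

d, r, delta = 3, 50, 0.1                 # number of inputs, repetitions, step size
ee = np.zeros((r, d))                    # elementary effects
for k in range(r):
    x = rng.uniform(0.0, 1.0, d)         # random base point in the unit cube
    y = toy_model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta
        ee[k, i] = (toy_model(xp) - y) / delta   # elementary effect of input i

mu_star = np.abs(ee).mean(axis=0)        # mean absolute effect (overall importance)
sigma = ee.std(axis=0)                   # spread (nonlinearity / interaction indicator)
for i in range(d):
    print(f"input {i}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
```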
Derivative-based local methods
Local derivative-based methods involve taking the partial derivative of the output Y with respect to an input factor X_i: (∂Y/∂X_i)_{x0},
where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling and automated differentiation are methods which allow all partial derivatives to be computed at a cost of at most 4–6 times that of evaluating the original function. Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is possible to select similar samples from derivative-based sensitivity through neural networks and perform uncertainty quantification.
One advantage of the local methods is that it is possible to form a matrix representing all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods when there is a large number of input and output variables.
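For models without adjoint or automatic-differentiation support, the local derivatives can be approximated by finite differences, as in the sketch below. The model, nominal point and step size are illustrative assumptions.

```python
import numpy as np

def toy_model(x):
    # Hypothetical model standing in for an expensive simulation code.
    return np.exp(x[0]) + x[1] * x[2] ** 2

x0 = np.array([0.5, 2.0, 1.5])          # nominal (baseline) point
h = 1e-6                                # finite-difference step

grad = np.zeros_like(x0)
for i in range(len(x0)):
    xp, xm = x0.copy(), x0.copy()
    xp[i] += h
    xm[i] -= h
    grad[i] = (toy_model(xp) - toy_model(xm)) / (2.0 * h)   # central difference

print("local sensitivities (partial derivatives at x0):", grad)
```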
Regression analysis
Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the coefficient of determination is large. The advantages of regression analysis are that it is simple and has a low computational cost.
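A sketch of the standardized-regression-coefficient (SRC) approach follows: a linear model is fitted to Monte Carlo samples of a toy function, and each coefficient is scaled by the ratio of input to output standard deviations. The toy function, noise level and sample size are assumptions made for illustration; for a near-linear model with independent inputs, the sum of squared SRCs is close to the coefficient of determination.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 3
X = rng.normal(size=(n, d))              # sampled inputs
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]   # drop the intercept

src = beta * X.std(axis=0) / y.std()     # standardized regression coefficients
print("SRCs:", np.round(src, 3))
print("sum of squared SRCs (close to R^2 if the model is near-linear):",
      round(float(np.sum(src ** 2)), 3))
```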
Variance-based methods
Variance-based methods are a class of probabilistic approaches which quantify the input and output uncertainties as random variables, represented via their probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input.
This amount is quantified and calculated using Sobol indices: they represent the proportion of variance explained by an input or group of inputs.
For an input X_i, the first-order Sobol index is defined as
S_i = Var(E[Y | X_i]) / Var(Y),
where Var and E denote the variance and expected value operators, respectively. This expression essentially measures the contribution of X_i alone to the uncertainty (variance) in Y (averaged over variations in the other variables), and is known as the first-order sensitivity index or main effect index.
Importantly, the first-order sensitivity index of X_i does not measure the uncertainty caused by the interactions that X_i has with other variables. A further measure, known as the total effect index S_Ti, gives the total variance in Y caused by X_i and its interactions with any of the other input variables. It is given as
S_Ti = E[Var(Y | X_~i)] / Var(Y) = 1 − Var(E[Y | X_~i]) / Var(Y),
where X_~i denotes the set of all input variables except X_i.
Variance-based methods allow full exploration of the input space, accounting for interactions, and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of Monte Carlo methods, but since this can involve many thousands of model runs, other methods (such as metamodels) can be used to reduce computational expense when necessary.
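The sketch below estimates first-order and total-effect Sobol indices with a standard Monte Carlo "pick-and-freeze" scheme (Saltelli- and Jansen-type estimators). The Ishigami-style toy function and the sample size are illustrative assumptions; dedicated libraries such as SALib provide more complete implementations.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    # Ishigami-type test function, commonly used as a sensitivity analysis benchmark.
    return np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

d, n = 3, 100_000
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # replace only column i ("pick and freeze")
    fABi = model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_y          # first-order index (Saltelli-type estimator)
    ST_i = 0.5 * np.mean((fA - fABi) ** 2) / var_y   # total-effect index (Jansen-type estimator)
    print(f"input {i}: S = {S_i:.3f}, ST = {ST_i:.3f}")
```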
Moment-independent methods
Moment-independent methods extend variance-based techniques by considering the probability density or cumulative distribution function of the model output . Thus, they do not refer to any particular moment of , whence the name.
The moment-independent sensitivity measure of an input X_i, here denoted by ξ_i, can be defined through an equation similar to that of the variance-based indices, replacing the conditional expectation with a distance: ξ_i = E[ d(P_Y, P_{Y|X_i}) ], where d is a statistical distance (metric or divergence) between probability measures, and P_Y and P_{Y|X_i} are the marginal and conditional probability measures of the output Y.
If d is a distance, the moment-independent global sensitivity measure satisfies zero-independence. This is a relevant statistical property also known as Renyi's postulate D.
The class of moment-independent sensitivity measures includes indicators such as the δ-importance measure, the new correlation coefficient of Chatterjee, the Wasserstein correlation of Wiesel and the kernel-based sensitivity measures of Barr and Rabitz.
Another measure for global sensitivity analysis, in the category of moment-independent approaches, is the PAWN index.
Variogram analysis of response surfaces (VARS)
One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the model in the parameter space. By utilizing the concepts of directional variograms and covariograms, variogram analysis of response surfaces (VARS) addresses this weakness by recognizing a spatially continuous correlation structure in the values of the model output across the parameter space.
Basically, the higher the variability, the more heterogeneous the response surface is along a particular direction/parameter, at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional variograms for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods. More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient). Notably, it has been shown that there is a theoretical link between the VARS framework and the variance-based and derivative-based approaches.
Fourier amplitude sensitivity test (FAST)
The Fourier amplitude sensitivity test (FAST) uses the Fourier series to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings.
Shapley effects
Shapley effects rely on Shapley values and represent the average marginal contribution of a given factor across all possible combinations of factors. These values are related to Sobol' indices, as their value falls between the first-order Sobol' effect and the total-order effect.
Chaos polynomials
The principle is to project the function of interest onto a basis of orthogonal polynomials. The Sobol indices are then expressed analytically in terms of the coefficients of this decomposition.
Complementary research approaches for time-consuming simulations
A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity, by creating a metamodel of the costly function to be evaluated and/or by "wisely" sampling the factor space.
Metamodels
Metamodels (also known as emulators, surrogate models or response surfaces) are data-modeling/machine learning approaches that involve building a relatively simple mathematical function, known as a metamodel, that approximates the input/output behavior of the model itself. In other words, it is the concept of "modeling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs, Y = f(X). By running the model at a number of points in the input space, it may be possible to fit a much simpler metamodel g(X), such that g(X) ≈ f(X) to within an acceptable margin of error. Then, sensitivity measures can be calculated from the metamodel (either with Monte Carlo or analytically), which will have a negligible additional computational cost. Importantly, the number of model runs required to fit the metamodel can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model.
Clearly, the crux of a metamodel approach is to find a metamodel g that is a sufficiently close approximation to the model f. This requires the following steps:
Sampling (running) the model at a number of points in its input space. This requires a sample design.
Selecting a type of emulator (mathematical function) to use.
"Training" the metamodel using the sample data from the model – this generally involves adjusting the metamodel parameters until the metamodel mimics the true model as well as possible.
Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence (due to mathematician Ilya M. Sobol) or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. The selection of the metamodel type and the training are intrinsically linked, since the training method will depend on the class of metamodel. Some types of metamodels that have been used successfully for sensitivity analysis include:
Gaussian processes (also known as kriging), where any combination of output points is assumed to be distributed as a multivariate Gaussian distribution. Recently, "treed" Gaussian processes have been used to deal with heteroscedastic and discontinuous responses.
Random forests, in which a large number of decision trees are trained, and the result averaged.
Gradient boosting, where a succession of simple regressions are used to weight data points to sequentially reduce error.
Polynomial chaos expansions, which use orthogonal polynomials to approximate the response surface.
Smoothing splines, normally used in conjunction with high-dimensional model representation (HDMR) truncations (see below).
Discrete Bayesian networks, in conjunction with canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables to significantly reduce dimensionality.
The use of an emulator introduces a machine learning problem, which can be difficult if the response of the model is highly nonlinear. In all cases, it is useful to check the accuracy of the emulator, for example using cross-validation.
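As an illustration of the metamodel workflow, the sketch below trains a Gaussian-process surrogate on a small number of runs of a toy function and then evaluates the surrogate on a much larger Monte Carlo sample. It assumes scikit-learn is available; the toy function, sample sizes and kernel choice are illustrative assumptions rather than recommendations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def expensive_model(X):
    # Stand-in for a costly simulation; cheap here so the example runs instantly.
    return np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2

# Step 1: sample (run) the "expensive" model at a small design of points.
X_train = rng.uniform(0.0, 1.0, size=(40, 2))
y_train = expensive_model(X_train)

# Steps 2-3: choose an emulator type and train it on the model runs.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# The trained surrogate can now be evaluated cheaply, e.g. for uncertainty analysis.
X_big = rng.uniform(0.0, 1.0, size=(100_000, 2))
y_hat = gp.predict(X_big)
print("surrogate-based output mean and variance:",
      round(float(y_hat.mean()), 3), round(float(y_hat.var()), 3))
```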
High-dimensional model representations (HDMR)
A high-dimensional model representation (HDMR) (the term is due to H. Rabitz) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines, and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators.
Monte Carlo filtering
Sensitivity analysis via Monte Carlo filtering is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output.
Related concepts
Sensitivity analysis is closely related to uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify which source of uncertainty weighs more on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
Sensitivity auditing
It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive, inter alia, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could range from the political (e.g. which group needs to be protected) to the technical (e.g. which variable can be treated as a constant).
In order to take these concerns into due consideration, the instruments of SA have been extended to provide an assessment of the entire knowledge- and model-generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP, a method used to qualify the worth of quantitative information with the generation of 'pedigrees' of numbers. Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, will be the subject of partisan interests. Sensitivity auditing is recommended in the European Commission guidelines for impact assessment, as well as in the report Science Advice for Policy by European Academies.
Pitfalls and difficulties
Some common difficulties in sensitivity analysis include:
Assumptions vs. inferences: In uncertainty and sensitivity analysis there is a crucial trade-off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer:
"I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful."
Note Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be suppressed lest outputs become indeterminate.
Not enough information to build probability distributions for the inputs: Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
Unclear purpose of the analysis: Different statistical tests and measures are applied to the problem and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
Too many model outputs are considered: This may be acceptable for the quality assurance of sub-models but should be avoided when presenting the results of the overall analysis.
Piecewise sensitivity: This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it might overlook interactions among factors in different sub-models (Type II error).
SA in international context
The importance of understanding and managing uncertainty in model results has inspired many scientists from different research centers all over the world to take a close interest in this subject. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment), the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and US Environmental Protection Agency's modeling guidelines.
Specific applications of sensitivity analysis
The following pages discuss sensitivity analyses in relation to specific applications:
Environmental sciences
Business
Epidemiology
Multi-criteria decision making
Model calibration
See also
Causality
Elementary effects method
Experimental uncertainty analysis
Fourier amplitude sensitivity testing
Info-gap decision theory
Interval FEM
Perturbation analysis
Probabilistic design
Probability bounds analysis
Robustification
ROC curve
Uncertainty quantification
Variance-based sensitivity analysis
Multiverse analysis
Feature selection
References
Further reading
Borgonovo, E. (2017). Sensitivity Analysis: An Introduction for the Management Scientist. International Series in Management Science and Operations Research, Springer New York.
Pilkey, O. H. and L. Pilkey-Jarvis (2007), Useless Arithmetic. Why Environmental Scientists Can't Predict the Future. New York: Columbia University Press.
Santner, T. J.; Williams, B. J.; Notz, W.I. (2003) Design and Analysis of Computer Experiments; Springer-Verlag.
Haug, Edward J.; Choi, Kyung K.; Komkov, Vadim (1986) Design sensitivity analysis of structural systems. Mathematics in Science and Engineering, 177. Academic Press, Inc., Orlando, FL.
Hall, C. A. S. and Day, J. W. (1977). Ecosystem Modeling in Theory and Practice: An Introduction with Case Histories. John Wiley & Sons, New York, NY. isbn=978-0-471-34165-9.
External links
Web site with material from SAMO conference series (1995-2025)
Simulation
Business intelligence terms
Mathematical modeling
Mathematical and quantitative methods (economics) | 0.776452 | 0.995857 | 0.773235 |
Colon classification | Colon classification (CC) is a library catalogue system developed by Shiyali Ramamrita Ranganathan. It was an early faceted (or analytico-synthetic) classification system. The first edition of colon classification was published in 1933, followed by six more editions. It is especially used in libraries in India.
Its name originates from its use of colons to separate facets into classes. Many other classification schemes, some of which are unrelated, also use colons and other punctuation to perform various functions. Originally, CC used only the colon as a separator, but since the second edition, CC has used four other punctuation symbols to identify each facet type.
In CC, facets describe "personality" (the most specific subject), matter, energy, space, and time (PMEST). These facets are generally associated with every item in a library, and thus form a reasonably universal sorting system.
As an example, the subject "research in the cure of tuberculosis of lungs by x-ray conducted in India in 1950" would be categorized by breaking it into its PMEST facets, which are then strung together with the facet punctuation to give a specific call number; a step-by-step version of this example is given below.
Organization
The colon classification system uses 42 main classes that are combined with other letters, numbers, and marks in a manner resembling the Library of Congress Classification.
Facets
CC uses five primary categories, or facets, to specify the sorting of a publication. Collectively, they are called PMEST: personality, matter, energy, space, and time.
Other symbols can be used to indicate components of facets called isolates, and to specify complex combinations or relationships between disciplines.
Classes
The following are the main classes of CC, with some subclasses, the main method used to sort the subclass using the PMEST scheme and examples showing application of PMEST.
z Generalia
1 Universe of Knowledge
2 Library Science
3 Book science
4 Journalism
A Natural science
B Mathematics
B2 Algebra
C Physics
D Engineering
E Chemistry
F Technology
G Biology
H Geology
HX Mining
I Botany
J Agriculture
J1 Horticulture
J2 Feed
J3 Food
J4 Stimulant
J5 Oil
J6 Drug
J7 Fabric
J8 Dye
K Zoology
KZ Animal Husbandry
L Medicine
LZ3 Pharmacology
LZ5 Pharmacopoeia
M Useful arts
M7 Textiles [material]:[work]
Δ Spiritual experience and mysticism [religion],[entity]:[problem]
N Fine arts
ND Sculpture
NN Engraving
NQ Painting
NR Music
O Literature
P Linguistics
Q Religion
R Philosophy
S Psychology
T Education
U Geography
V History
W Political science
X Economics
Y Sociology
YZ Social Work
Z Law
Example
A common example of the colon classification is:
"Research in the cure of the tuberculosis of lungs by x-ray conducted in India in 1950s":
The main classification is Medicine;
(Medicine)
Within Medicine, the Lungs are the main concern;
The property of the Lungs is that they are afflicted with Tuberculosis;
The Tuberculosis is being acted on (:), that is, the intent is to cure it (Treatment);
The matter that we are treating the Tuberculosis with is X-Rays;
And this discussion of treatment is regarding the Research phase;
This Research is performed within a geographical space (.), namely India;
During the time (') of 1950;
And finally, the classification is obtained by translating each subject and facet into the codes listed in the schedules and joining them with the facet punctuation, as sketched below.
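The string-assembly step can be illustrated with a short sketch. The facet-indicator punctuation used here (comma for personality, semicolon for matter, colon for energy, dot for space, apostrophe for time) follows the conventional description of CC, but the individual facet codes in the example are placeholders not taken from the schedules, so the printed number is only illustrative.

```python
# A minimal sketch of assembling a Colon Classification-style class number.
# The facet codes below (other than "L" for Medicine) are hypothetical placeholders.

INDICATORS = {
    "personality": ",",
    "matter": ";",
    "energy": ":",
    "space": ".",
    "time": "'",
}

def class_number(main_class, facets):
    """Join (facet_type, code) pairs onto the main class with CC facet punctuation."""
    number = main_class
    for facet_type, code in facets:
        number += INDICATORS[facet_type] + code
    return number

example = class_number("L", [            # L = Medicine (main class, from the schedule above)
    ("personality", "45"),               # placeholder code for the organ concerned
    ("matter", "253"),                   # placeholder code for the material used
    ("energy", "6"),                     # placeholder code for the action (e.g. treatment)
    ("space", "44"),                     # placeholder code for the geographical area
    ("time", "N5"),                      # placeholder code for the period
])
print(example)   # -> L,45;253:6.44'N5 with these placeholder codes
```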
References
Further reading
Colon Classification (6th Edition) by Shiyali Ramamrita Ranganathan, published by Ess Ess Publications, Delhi, India
Chan, Lois Mai. Cataloging and Classification: An Introduction. 2nd ed. New York: McGraw-Hill, c. 1994. .
Knowledge representation
Library cataloging and classification | 0.783823 | 0.98649 | 0.773234 |
Ecological pyramid | An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass at each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (e.g. a pyramid of biomass for a marine region) or take other shapes (spindle-shaped pyramids).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
Biomass can be measured by a bomb calorimeter.
Pyramid of Energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. This allows organisms on the lower levels to not only maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalization is when portions of a food web are supported by inputs of resources from outside the local community. In small, forested streams, for example, the volume of higher levels is greater than could be supported by the local primary production.
Energy usually enters ecosystems from the Sun. The primary producers at the base of the pyramid use solar radiation to power photosynthesis which produces food. However most wavelengths in solar radiation cannot be used for photosynthesis, so they are reflected back into space or absorbed elsewhere and converted to heat. Only 1 to 2 percent of the energy from the sun is absorbed by photosynthetic processes and converted into food. When energy is transferred to higher trophic levels, on average only about 10% is used at each level to build biomass, becoming stored energy. The rest goes to metabolic processes such as growth, respiration, and reproduction.
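As a rough numerical illustration of the ten-percent figure mentioned above, the sketch below propagates an assumed amount of energy fixed by producers up four trophic levels. The starting value and the flat 10% transfer efficiency are simplifying assumptions, not measured data.

```python
# Illustrative propagation of energy up a food chain with a flat 10% transfer efficiency.
producers_kcal_per_m2_per_yr = 10_000.0   # assumed energy fixed by producers
transfer_efficiency = 0.10                # roughly 10% passed on as new biomass

levels = ["producers", "primary consumers", "secondary consumers", "tertiary consumers"]
energy = producers_kcal_per_m2_per_yr
for name in levels:
    print(f"{name}: {energy:,.0f} kcal/m^2/yr")
    energy *= transfer_efficiency         # only ~10% becomes new biomass at the next level
```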
Advantages of the pyramid of energy as a representation:
It takes account of the rate of production over a period of time.
Two species of comparable biomass may have very different life spans. Thus, a direct comparison of their total biomasses is misleading, but their productivity is directly comparable.
The relative energy chain within an ecosystem can be compared using pyramids of energy; also different ecosystems can be compared.
There are no inverted pyramids.
The input of solar energy can be added.
Disadvantages of the pyramid of energy as a representation:
The rate of biomass production of an organism is required, which involves measuring growth and reproduction through time.
There is still the difficulty of assigning the organisms to a specific trophic level. As well as the organisms in the food chains there is the problem of assigning the decomposers and detritivores to a particular level.
Pyramid of biomass
A pyramid of biomass shows the relationship between biomass and trophic level by quantifying the biomass present at each trophic level of an ecological community at a particular time. It is a graphical representation of biomass (total amount of living or organic matter in an ecosystem) present in unit area in different trophic levels. Typical units are grams per square meter, or calories per square meter.
The pyramid of biomass may be "inverted". For example, in a pond ecosystem, the standing crop of phytoplankton, the major producers, at any given point will be lower than the mass of the heterotrophs, such as fish and insects. This is explained by the fact that the phytoplankton reproduce very quickly but have much shorter individual lives.
Pyramid of Numbers
A pyramid of numbers shows graphically the population, or abundance, in terms of the number of individual organisms involved at each level in a food chain. This shows the number of organisms in each trophic level without any consideration for their individual sizes or biomass. The pyramid is not necessarily upright. For example, it will be inverted if beetles are feeding from the output of forest trees, or parasites are feeding on large host animals.
History
The concept of a pyramid of numbers ("Eltonian pyramid") was developed by Charles Elton (1927). Later, it would also be expressed in terms of biomass by Bodenheimer (1938). The idea of the pyramid of productivity or energy relies on the works of G. Evelyn Hutchinson and Raymond Lindeman (1942).
See also
Trophic cascade
References
Bibliography
Odum, E.P. 1971. Fundamentals of Ecology. Third Edition. W.B. Saunders Company, Philadelphia,
External links
Food Chains
Ecology
Food chains | 0.775857 | 0.996617 | 0.773232 |
Formal semantics (natural language) | Formal semantics is the study of grammatical meaning in natural languages using formal concepts from logic, mathematics and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars.
Overview
Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent.
Formal semantics is an interdisciplinary field, often viewed as a subfield of both linguistics and philosophy, while also incorporating work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning. Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax–semantics interface and crosslinguistic variation.
Central concepts
Truth conditions
The fundamental question of formal semantics is what you know when you know how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words knowing what the world would have to be like for the sentence to be true. For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking.
However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions. In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises an issue of whether Nancy drinks. Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context.
Pietroski treats meanings as instructions to build concepts.
Compositionality
The Principle of Compositionality is the fundamental assumption in formal semantics. This principle states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the meaning of "Nancy", the denotation of "smokes", and whatever semantic operations combine the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that the words "Nancy" and "smokes" are semantically composed via function application, this analysis would predict that the sentence as a whole is true if Nancy indeed smokes.
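The simplified analysis above can be mirrored in a few lines of code: a proper name denotes an individual, an intransitive verb denotes a function from individuals to truth values, and the sentence's truth value is obtained by function application. The particular model (who smokes) is of course an assumption made only for illustration.

```python
# A toy extensional model: the denotation of "smokes" is a characteristic function.
smokers = {"Nancy"}

def smokes(x):
    """[[smokes]]: maps an individual to True iff that individual smokes in the model."""
    return x in smokers

nancy = "Nancy"              # [[Nancy]] denotes the individual Nancy

# Compositional rule: combine subject and predicate by function application.
sentence_denotation = smokes(nancy)
print(sentence_denotation)   # True, i.e. "Nancy smokes" is true in this model
```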
Phenomena
Scope
Scope can be thought of as the semantic order of operations. For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position, and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations.
Binding
Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major component to the government and binding theory paradigm.
Modality
Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic.
History
Formal semantics emerged as a major area of research in the early 1970s, with the pioneering work of the philosopher and logician Richard Montague. Montague proposed a formal system now known as Montague grammar which consisted of a novel syntactic formalism for English, a logical system called Intensional Logic, and a set of homomorphic translation rules linking the two. In retrospect, Montague Grammar has been compared to a Rube Goldberg machine, but it was regarded as earth-shattering when first proposed, and many of its fundamental insights survive in the various semantic models which have superseded it.
Montague Grammar was a major advance because it showed that natural languages could be treated as interpreted formal languages. Before Montague, many linguists had doubted that this was possible, and logicians of that era tended to view logic as a replacement for natural language rather than a tool for analyzing it. Montague's work was published during the Linguistics Wars, and many linguists were initially puzzled by it. While linguists wanted a restrictive theory that could only model phenomena that occur in human languages, Montague sought a flexible framework that characterized the concept of meaning at its most general. At one conference, Montague told Barbara Partee that she was "the only linguist who it is not the case that I can't talk to".
Formal semantics grew into a major subfield of linguistics in the late 1970s and early 1980s, due to the seminal work of Barbara Partee. Partee developed a linguistically plausible system which incorporated the key insights of both Montague Grammar and Transformational grammar. Early research in linguistic formal semantics used Partee's system to achieve a wealth of empirical and conceptual results. Later work by Irene Heim, Angelika Kratzer, Tanya Reinhart, Robert May and others built on Partee's work to further reconcile it with the generative approach to syntax. The resulting framework is known as the Heim and Kratzer system, after the authors of the textbook Semantics in Generative Grammar which first codified and popularized it. The Heim and Kratzer system differs from earlier approaches in that it incorporates a level of syntactic representation called logical form which undergoes semantic interpretation. Thus, this system often includes syntactic representations and operations which were introduced by translation rules in Montague's system. However, work by others such as Gerald Gazdar proposed models of the syntax-semantics interface which stayed closer to Montague's, providing a system of interpretation in which denotations could be computed on the basis of surface structures. These approaches live on in frameworks such as categorial grammar and combinatory categorial grammar.
Cognitive semantics emerged as a reaction against formal semantics, but there have been recently several attempts at reconciling both positions.
See also
Alternative semantics
Barbara Partee
Compositionality
Computational semantics
Discourse representation theory
Dynamic semantics
Frame semantics (linguistics)
Inquisitive semantics
Philosophy of language
Pragmatics
Richard Montague
Montague grammar
Traditional grammar
Syntax–semantics interface
References
Further reading
A very accessible overview of the main ideas in the field.
Chapter 10, Formal semantics, contains the best chapter-level coverage of the main technical directions
The most comprehensive reference in the area.
One of the first textbooks. Accessible to undergraduates.
Reinhard Muskens. Type-logical Semantics. Routledge Encyclopedia of Philosophy Online.
Barbara H. Partee. Reflections of a formal semanticist as of Feb 2005. Ample historical information. (An extended version of the introductory essay in Barbara H. Partee: Compositionality in Formal Semantics: Selected Papers of Barbara Partee. Blackwell Publishers, Oxford, 2004.)
Semantics
Formal semantics (natural language)
Grammar | 0.785185 | 0.984767 | 0.773224 |
Table of thermodynamic equations | Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows:
Definitions
Many of the definitions below are also used in the thermodynamics of chemical reactions.
General basic quantities
General derived quantities
Thermal properties of matter
Thermal transfer
Equations
The equations in this article are classified by subject.
Thermodynamic processes
Kinetic theory
Ideal gas
Entropy
S = kB ln Ω, where kB is the Boltzmann constant and Ω denotes the volume of the macrostate in phase space (otherwise called the thermodynamic probability).
dS = δQ/T, for reversible processes only
Statistical physics
Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases.
Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below.
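As a numerical illustration, the non-relativistic Maxwell–Boltzmann speed distribution can be written f(v) = 4π v² (m/(2π kB T))^(3/2) exp(−m v²/(2 kB T)). The sketch below evaluates it for molecular nitrogen at room temperature and checks the standard corollaries for the most probable, mean, and root-mean-square speeds; the choice of gas and temperature is arbitrary and made only for illustration.

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
m = 4.6518e-26            # approximate mass of an N2 molecule, kg
T = 300.0                 # temperature, K (arbitrary illustrative choice)

v = np.linspace(0.0, 3000.0, 20_000)   # speeds in m/s
f = (4.0 * np.pi * v**2 * (m / (2.0 * np.pi * k_B * T))**1.5
     * np.exp(-m * v**2 / (2.0 * k_B * T)))

# Trapezoidal check that the density integrates to ~1 over the sampled range.
norm = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v))
print("normalization (should be ~1):", norm)
print("most probable speed sqrt(2kT/m)  :", np.sqrt(2 * k_B * T / m), "m/s")
print("mean speed sqrt(8kT/(pi m))      :", np.sqrt(8 * k_B * T / (np.pi * m)), "m/s")
print("rms speed sqrt(3kT/m)            :", np.sqrt(3 * k_B * T / m), "m/s")
```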
Quasi-static and reversible processes
For quasi-static and reversible processes, the first law of thermodynamics is: dU = δQ − δW,
where δQ is the heat supplied to the system and δW is the work done by the system.
Thermodynamic potentials
The following energies are called the thermodynamic potentials: the internal energy U, the Helmholtz free energy F = U − TS, the enthalpy H = U + pV, and the Gibbs free energy G = U + pV − TS.
For a closed system of constant composition, the corresponding fundamental thermodynamic relations or "master equations" are:
dU = T dS − p dV
dF = −S dT − p dV
dH = T dS + V dp
dG = −S dT + V dp
Maxwell's relations
The four most common Maxwell's relations are:
(∂T/∂V)_S = −(∂p/∂S)_V
(∂T/∂p)_S = (∂V/∂S)_p
(∂S/∂V)_T = (∂p/∂T)_V
(∂S/∂p)_T = −(∂V/∂T)_p
More relations include the following.
Other differential equations are:
Quantum properties
Indistinguishable Particles
where N is the number of particles, h is the Planck constant, I is the moment of inertia, and Z is the partition function, in various forms:
Thermal properties of matter
Thermal transfer
Thermal efficiencies
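One representative relation in this category is the Carnot efficiency, η = 1 − Tc/Th, which bounds the efficiency of any heat engine operating between a hot reservoir at temperature Th and a cold reservoir at Tc. The brief sketch below evaluates it for illustrative reservoir temperatures (the numbers are arbitrary choices, not data from the table).

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum (Carnot) efficiency of a heat engine between two reservoirs, in kelvin."""
    return 1.0 - T_cold / T_hot

# Illustrative numbers: a hot reservoir at 800 K rejecting heat to surroundings at 300 K.
print(f"Carnot efficiency: {carnot_efficiency(800.0, 300.0):.1%}")   # 62.5%
```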
See also
List of thermodynamic properties
Antoine equation
Bejan number
Bowen ratio
Bridgman's equations
Clausius–Clapeyron relation
Departure functions
Duhem–Margules equation
Ehrenfest equations
Gibbs–Helmholtz equation
Phase rule
Kopp's law
Noro–Frenkel law of corresponding states
Onsager reciprocal relations
Stefan number
Thermodynamics
Timeline of thermodynamics
Triple product rule
Exact differential
References
Atkins, Peter and de Paula, Julio Physical Chemistry, 7th edition, W.H. Freeman and Company, 2002 .
Chapters 1–10, Part 1: "Equilibrium".
Landsberg, Peter T. Thermodynamics and Statistical Mechanics. New York: Dover Publications, Inc., 1990. (reprinted from Oxford University Press, 1978).
Lewis, G.N., and Randall, M., "Thermodynamics", 2nd Edition, McGraw-Hill Book Company, New York, 1961.
Reichl, L.E., A Modern Course in Statistical Physics, 2nd edition, New York: John Wiley & Sons, 1998.
Schroeder, Daniel V. Thermal Physics. San Francisco: Addison Wesley Longman, 2000 .
Silbey, Robert J., et al. Physical Chemistry, 4th ed. New Jersey: Wiley, 2004.
Callen, Herbert B. (1985). Thermodynamics and an Introduction to Themostatistics, 2nd edition, New York: John Wiley & Sons.
External links
Thermodynamic equation calculator
Thermodynamic equations
Thermodynamics
Chemical engineering | 0.779216 | 0.992305 | 0.77322 |
Organic compound | Some chemical authorities define an organic compound as a chemical compound that contains a carbon–hydrogen or carbon–carbon bond; others consider an organic compound to be any chemical compound that contains carbon. For example, carbon-containing compounds such as alkanes (e.g. methane ) and its derivatives are universally considered organic, but many others are sometimes considered inorganic, such as halides of carbon without carbon-hydrogen and carbon-carbon bonds (e.g. carbon tetrachloride ), and certain compounds of carbon with nitrogen and oxygen (e.g. cyanide ion , hydrogen cyanide , chloroformic acid , carbon dioxide , and carbonate ion ).
Due to carbon's ability to catenate (form chains with other carbon atoms), millions of organic compounds are known. The study of the properties, reactions, and syntheses of organic compounds comprise the discipline known as organic chemistry. For historical reasons, a few classes of carbon-containing compounds (e.g., carbonate salts and cyanide salts), along with a few other exceptions (e.g., carbon dioxide, and even hydrogen cyanide despite the fact it contains a carbon-hydrogen bond), are generally considered inorganic. Other than those just named, little consensus exists among chemists on precisely which carbon-containing compounds are excluded, making any rigorous definition of an organic compound elusive.
Although organic compounds make up only a small percentage of Earth's crust, they are of central importance because all known life is based on organic compounds. Living things incorporate inorganic carbon compounds into organic compounds through a network of processes (the carbon cycle) that begins with the conversion of carbon dioxide and a hydrogen source like water into simple sugars and other organic molecules by autotrophic organisms using light (photosynthesis) or other sources of energy. Most synthetically-produced organic compounds are ultimately derived from petrochemicals consisting mainly of hydrocarbons, which are themselves formed from the high pressure and temperature degradation of organic matter underground over geological timescales. This ultimate derivation notwithstanding, organic compounds are no longer defined as compounds originating in living things, as they were historically.
In chemical nomenclature, an organyl group, frequently represented by the letter R, refers to any monovalent substituent whose open valence is on a carbon atom.
Definition
For historical reasons discussed below, a few types of carbon-containing compounds, such as carbides, carbonates (excluding carbonate esters), simple oxides of carbon (for example, CO and ) and cyanides are generally considered inorganic compounds. Different forms (allotropes) of pure carbon, such as diamond, graphite, fullerenes and carbon nanotubes are also excluded because they are simple substances composed of a single element and so not generally considered chemical compounds. The word "organic" in this context does not mean "natural".
History
Vitalism
Vitalism was a widespread conception that substances found in organic nature are formed from the chemical elements by the action of a "vital force" or "life-force" (vis vitalis) that only living organisms possess.
In the 1810s, Jöns Jacob Berzelius argued that a regulative force must exist within living bodies. Berzelius also contended that compounds could be distinguished by whether they required any organisms in their synthesis (organic compounds) or whether they did not (inorganic compounds). Vitalism taught that formation of these "organic" compounds were fundamentally different from the "inorganic" compounds that could be obtained from the elements by chemical manipulations in laboratories.
Vitalism survived for a short period after the formulation of modern ideas about the atomic theory and chemical elements. It first came under question in 1824, when Friedrich Wöhler synthesized oxalic acid, a compound known to occur only in living organisms, from cyanogen. A further experiment was Wöhler's 1828 synthesis of urea from the inorganic salts potassium cyanate and ammonium sulfate. Urea had long been considered an "organic" compound, as it was known to occur only in the urine of living organisms. Wöhler's experiments were followed by many others, in which increasingly complex "organic" substances were produced from "inorganic" ones without the involvement of any living organism, thus disproving vitalism.
Modern classification and ambiguities
Although vitalism has been discredited, scientific nomenclature retains the distinction between organic and inorganic compounds. The modern meaning of organic compound is any compound that contains a significant amount of carbon—even though many of the organic compounds known today have no connection to any substance found in living organisms. The term carbogenic has been proposed by E. J. Corey as a modern alternative to organic, but this neologism remains relatively obscure.
The organic compound L-isoleucine molecule presents some features typical of organic compounds: carbon–carbon bonds, carbon–hydrogen bonds, as well as covalent bonds from carbon to oxygen and to nitrogen.
As described in detail below, any definition of organic compound that uses simple, broadly-applicable criteria turns out to be unsatisfactory, to varying degrees. The modern, commonly accepted definition of organic compound essentially amounts to any carbon-containing compound, excluding several classes of substances traditionally considered "inorganic". The list of substances so excluded varies from author to author. Still, it is generally agreed upon that there are (at least) a few carbon-containing compounds that should not be considered organic. For instance, almost all authorities would require the exclusion of alloys that contain carbon, including steel (which contains cementite, Fe3C), as well as other metal and semimetal carbides (including "ionic" carbides such as CaC2, "covalent" carbides such as SiC, and graphite intercalation compounds such as KC8). Other compounds and materials that are considered 'inorganic' by most authorities include: metal carbonates, simple oxides of carbon (CO and CO2), the allotropes of carbon, cyanide derivatives not containing an organic residue (e.g., KCN, BrCN, the cyanate anion OCN−, etc.), and heavier analogs thereof (e.g., the cyaphide anion, CS2, and COS; although carbon disulfide is often classed as an organic solvent). Halides of carbon without hydrogen, phosgene, carboranes, metal carbonyls (e.g., nickel tetracarbonyl), mellitic anhydride, and other exotic oxocarbons are also considered inorganic by some authorities.
Nickel tetracarbonyl and other metal carbonyls are often volatile liquids, like many organic compounds, yet they contain only carbon bonded to a transition metal and to oxygen, and are often prepared directly from metal and carbon monoxide. Nickel tetracarbonyl is typically classified as an organometallic compound as it satisfies the broad definition that organometallic chemistry covers all compounds that contain at least one carbon to metal covalent bond; it is unknown whether organometallic compounds form a subset of organic compounds. For example, the evidence of covalent Fe-C bonding in cementite, a major component of steel, places it within this broad definition of organometallic, yet steel and other carbon-containing alloys are seldom regarded as organic compounds. Thus, it is unclear whether the definition of organometallic should be narrowed, whether these considerations imply that organometallic compounds are not necessarily organic, or both.
Metal complexes with organic ligands but no carbon-metal bonds (e.g., metal acetates) are not considered organometallic; instead, they are called metal-organic compounds (and might be considered organic).
The relatively narrow definition of organic compounds as those containing C-H bonds excludes compounds that are (historically and practically) considered organic. Neither urea nor oxalic acid is organic by this definition, yet they were two key compounds in the vitalism debate. However, the IUPAC Blue Book on organic nomenclature specifically mentions urea and oxalic acid as organic compounds. Other compounds lacking C-H bonds but traditionally considered organic include benzenehexol, mesoxalic acid, and carbon tetrachloride. Mellitic acid, which contains no C-H bonds, is considered a possible organic compound in Martian soil. Terrestrially, it, and its anhydride, mellitic anhydride, are associated with the mineral mellite.
A slightly broader definition of organic compound includes all compounds bearing C-H or C-C bonds. This would still exclude urea. Moreover, this definition still leads to somewhat arbitrary divisions in sets of carbon-halogen compounds. For example, CF4 and CCl4 would be considered by this rule to be "inorganic", whereas CHF3, CHCl3, and C2Cl6 would be organic, though these compounds share many physical and chemical properties. A minimal sketch illustrating these rules is given below.
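To make the competing criteria concrete, the following is a minimal Python sketch (not from the source; the compounds and their bond inventories are entered by hand purely for illustration) that applies the two definitions discussed above — "contains a C–H bond" versus "contains a C–H or C–C bond" — to a few of the compounds mentioned.

```python
# Minimal sketch: classify compounds under two candidate definitions of "organic".
# The bond inventories below are assigned by hand for illustration only.
compounds = {
    "urea":                 {"C-H": False, "C-C": False},
    "oxalic acid":          {"C-H": False, "C-C": True},
    "carbon tetrachloride": {"C-H": False, "C-C": False},
    "chloroform":           {"C-H": True,  "C-C": False},
    "ethanol":              {"C-H": True,  "C-C": True},
}

def organic_by_ch(bonds):
    """Narrow definition: organic if the compound contains at least one C-H bond."""
    return bonds["C-H"]

def organic_by_ch_or_cc(bonds):
    """Slightly broader definition: organic if it contains a C-H or a C-C bond."""
    return bonds["C-H"] or bonds["C-C"]

for name, bonds in compounds.items():
    print(f"{name:22s}  C-H rule: {organic_by_ch(bonds)!s:5s}  "
          f"C-H/C-C rule: {organic_by_ch_or_cc(bonds)}")
```

Note how urea is excluded under both rules while oxalic acid is recovered only by the broader one, mirroring the ambiguities described above.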
Classification
Organic compounds may be classified in a variety of ways. One major distinction is between natural and synthetic compounds. Organic compounds can also be classified or subdivided by the presence of heteroatoms, e.g., organometallic compounds, which feature bonds between carbon and a metal, and organophosphorus compounds, which feature bonds between carbon and a phosphorus.
Another distinction, based on the size of organic compounds, distinguishes between small molecules and polymers.
Natural compounds
Natural compounds refer to those that are produced by plants or animals. Many of these are still extracted from natural sources because they would be more expensive to produce artificially. Examples include most sugars, some alkaloids and terpenoids, certain nutrients such as vitamin B12, and, in general, those natural products with large or stereochemically complicated molecules present in reasonable concentrations in living organisms.
Further compounds of prime importance in biochemistry are antigens, carbohydrates, enzymes, hormones, lipids and fatty acids, neurotransmitters, nucleic acids, proteins, peptides and amino acids, lectins, vitamins, and fats and oils.
Synthetic compounds
Compounds that are prepared by reaction of other compounds are known as "synthetic". They may be either compounds that are already found in plants/animals or those artificial compounds that do not occur naturally.
Most polymers (a category that includes all plastics and rubbers) are organic synthetic or semi-synthetic compounds.
Biotechnology
Many organic compounds—two examples are ethanol and insulin—are manufactured industrially using organisms such as bacteria and yeast. Typically, the DNA of an organism is altered to express compounds not ordinarily produced by the organism. Many such biotechnology-engineered compounds did not previously exist in nature.
Databases
The CAS database is the most comprehensive repository for data on organic compounds. It is searched using the SciFinder tool.
The Beilstein database contains information on 9.8 million substances, covers the scientific literature from 1771 to the present, and is today accessible via Reaxys. Structures and a large diversity of physical and chemical properties are available for each substance, with reference to original literature.
PubChem contains 18.4 million entries on compounds and especially covers the field of medicinal chemistry.
A great number of more specialized databases exist for diverse branches of organic chemistry.
Structure determination
The main tools are proton and carbon-13 NMR spectroscopy, IR spectroscopy, mass spectrometry, UV/Vis spectroscopy and X-ray crystallography.
See also
List of chemical compounds
List of organic compounds
References
External links
Organic Compounds Database
Organic Materials Database
Organic chemistry
Structural analog
A structural analog, also known as a chemical analog or simply an analog, is a compound having a structure similar to that of another compound, but differing from it in respect to a certain component.
It can differ in one or more atoms, functional groups, or substructures, which are replaced with other atoms, groups, or substructures. A structural analog can be imagined to be formed, at least theoretically, from the other compound. Structural analogs are often isoelectronic.
Despite a high chemical similarity, structural analogs are not necessarily functional analogs and can have very different physical, chemical, biochemical, or pharmacological properties.
In drug discovery, either a large series of structural analogs of an initial lead compound are created and tested as part of a structure–activity relationship study or a database is screened for structural analogs of a lead compound.
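As a rough illustration of the database-screening step, the sketch below (a hypothetical example, not from the source) represents each molecule by a set of structural features and ranks database entries by Tanimoto similarity to a lead compound; real workflows would typically use cheminformatics fingerprints (e.g., generated with a toolkit such as RDKit) rather than hand-written feature sets, and the threshold value here is arbitrary.

```python
# Hypothetical sketch: rank candidate analogs of a lead compound by Tanimoto
# similarity of hand-assigned structural feature sets (illustration only).
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

lead = {"benzene ring", "carboxylic acid", "para substitution"}

database = {
    "candidate A": {"benzene ring", "carboxylic acid", "meta substitution"},
    "candidate B": {"benzene ring", "ester", "para substitution"},
    "candidate C": {"cyclohexane ring", "alcohol"},
}

threshold = 0.4  # arbitrary cutoff for calling something a "structural analog"
hits = sorted(
    ((name, tanimoto(lead, feats)) for name, feats in database.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in hits:
    flag = "analog" if score >= threshold else "dissimilar"
    print(f"{name}: similarity {score:.2f} -> {flag}")
```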
Chemical analogues of illegal drugs are developed and sold in order to circumvent laws. Such substances are often called designer drugs. Because of this, the United States passed the Federal Analogue Act in 1986. This bill banned the production of any chemical analogue of a Schedule I or Schedule II substance that has substantially similar pharmacological effects, when intended for human consumption.
Examples
Neurotransmitter analog
A neurotransmitter analog is a structural analogue of a neurotransmitter, typically a drug. Some examples include:
Catecholamine analogue
Serotonin analogue
GABA analogue
See also
Derivative (chemistry)
Federal Analogue Act, a United States bill banning chemical analogues of illegal drugs
Functional analog, compounds with similar physical, chemical, biochemical, or pharmacological properties
Homolog, a compound of a series differing only by repeated units
Transition state analog
References
External links
Analoging in ChEMBL, DrugBank and the Connectivity Map – a free web-service for finding structural analogs in ChEMBL, DrugBank, and the Connectivity Map
Chemical nomenclature
Thermogravimetric analysis
Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions, absorption, adsorption and desorption; as well as chemical phenomena including chemisorption, thermal decomposition, and solid-gas reactions (e.g., oxidation or reduction).
Thermogravimetric analyzer
Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis while many additional measures may be derived from these three base measurements.
A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with a programmable control temperature. The temperature is generally increased at constant rate (or for some applications the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres including: ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures including: a high vacuum, high pressure, constant pressure, or a controlled pressure.
The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y axis versus either temperature or time on the x-axis. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis.
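As a simple illustration of how a DTG curve is obtained from TGA data, the following Python sketch (with synthetic mass-loss data, not from the source) numerically differentiates mass with respect to temperature; instrument software does essentially the same, usually with additional smoothing.

```python
import numpy as np

# Synthetic TGA data: a single smooth mass-loss step centred near 600 K
# (illustration only; real data come from the instrument).
temperature = np.linspace(300.0, 900.0, 601)          # K
mass_pct = 100.0 - 40.0 / (1.0 + np.exp(-(temperature - 600.0) / 20.0))

# DTG curve: first derivative of mass with respect to temperature.
dtg = np.gradient(mass_pct, temperature)               # % per K

# The DTG minimum marks the temperature of maximum mass-loss rate,
# a common way to locate decomposition steps.
t_peak = temperature[np.argmin(dtg)]
print(f"Maximum mass-loss rate at about {t_peak:.0f} K")
```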
A TGA can be used for materials characterization through analysis of characteristic decomposition patterns. It is an especially useful technique for the study of polymeric materials, including thermoplastics, thermosets, elastomers, composites, plastic films, fibers, coatings, paints, and fuels.
Types of TGA
There are three types of thermogravimetry:
Isothermal or static thermogravimetry: In this technique, the sample weight is recorded as a function of time at a constant temperature.
Quasistatic thermogravimetry: In this technique, the sample temperature is raised in sequential steps separated by isothermal intervals, during which the sample mass reaches stability before the start of the next temperature ramp.
Dynamic thermogravimetry: In this technique, the sample is heated in an environment whose temperature is changed in a linear manner.
Applications
Thermal stability
TGA can be used to evaluate the thermal stability of a material. In a desired temperature range, if a species is thermally stable, there will be no observed mass change. Negligible mass loss corresponds to little or no slope in the TGA trace. TGA also gives the upper use temperature of a material. Beyond this temperature the material will begin to degrade.
TGA is used in the analysis of polymers. Polymers usually melt before they decompose, thus TGA is mainly used to investigate the thermal stability of polymers. Most polymers melt or degrade before 200 °C. However, there is a class of thermally stable polymers that are able to withstand temperatures of at least 300 °C in air and 500 °C in inert gases without structural changes or strength loss, which can be analyzed by TGA.
Oxidation and combustion
The simplest materials characterization is the residue remaining after a reaction. For example, a combustion reaction could be tested by loading a sample into a thermogravimetric analyzer at normal conditions. The thermogravimetric analyzer would cause combustion of the sample by heating it beyond its ignition temperature. The resultant TGA curve plotted with the y-axis as a percentage of initial mass would show the residue at the final point of the curve.
Oxidative mass losses are the most common observable losses in TGA.
Studying the resistance to oxidation in copper alloys is very important. For example, NASA (National Aeronautics and Space Administration) is conducting research on advanced copper alloys for their possible use in combustion engines. However, oxidative degradation can occur in these alloys as copper oxides form in atmospheres that are rich in oxygen. Resistance to oxidation is significant because NASA wants to be able to reuse shuttle materials. TGA can be used to study the static oxidation of materials such as these for practical use.
Combustion during TG analysis is identifiable by distinct traces made in the TGA thermograms produced. One interesting example occurs with samples of as-produced unpurified carbon nanotubes that have a large amount of metal catalyst present. Due to combustion, a TGA trace can deviate from the normal form of a well-behaved function. This phenomenon arises from a rapid temperature change. When the weight and temperature are plotted versus time, a dramatic slope change in the first derivative plot is concurrent with the mass loss of the sample and the sudden increase in temperature seen by the thermocouple. The mass loss could result from particles of smoke released from burning caused by inconsistencies in the material itself, beyond the oxidation of carbon due to poorly controlled weight loss.
Different weight losses on the same sample at different points can also be used as a diagnosis of the sample's anisotropy. For instance, sampling the top side and the bottom side of a sample with dispersed particles inside can be useful to detect sedimentation, as thermograms will not overlap but will show a gap between them if the particle distribution is different from side to side.
Thermogravimetric kinetics
Thermogravimetric kinetics may be explored for insight into the reaction mechanisms of thermal (catalytic or non-catalytic) decomposition involved in the pyrolysis and combustion processes of different materials.
Activation energies of the decomposition process can be calculated using the Kissinger method.
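A minimal sketch of the Kissinger method is given below (with made-up peak temperatures, not from the source): for several heating rates β, ln(β/Tp²) is plotted against 1/Tp, and the activation energy follows from the slope, Ea = −R × slope.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical DTG peak temperatures Tp (K) measured at different heating rates (K/min).
beta = np.array([5.0, 10.0, 20.0, 40.0])       # heating rates (illustrative)
tp   = np.array([610.0, 622.0, 635.0, 649.0])  # peak temperatures (illustrative)

# Kissinger plot: ln(beta / Tp^2) versus 1/Tp is (approximately) a straight line.
x = 1.0 / tp
y = np.log(beta / tp**2)

slope, intercept = np.polyfit(x, y, 1)
ea = -slope * R  # activation energy in J/mol
print(f"Estimated activation energy: {ea / 1000:.0f} kJ/mol")
```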
Though a constant heating rate is more common, a constant mass loss rate can illuminate specific reaction kinetics. For example, the kinetic parameters of the carbonization of polyvinyl butyral were found using a constant mass loss rate of 0.2 wt %/min.
Operation in combination with other instruments
Thermogravimetric analysis is often combined with other processes or used in conjunction with other analytical methods.
For example, the TGA instrument continuously weighs a sample as it is heated to temperatures of up to 2000 °C for coupling with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry gas analysis. As the temperature increases, various components of the sample are decomposed and the weight percentage of each resulting mass change can be measured.
References
Thermodynamics
Materials science
Analytical chemistry
Chemotroph
A chemotroph is an organism that obtains energy by the oxidation of electron donors in its environment. These molecules can be organic (chemoorganotrophs) or inorganic (chemolithotrophs). The chemotroph designation is in contrast to phototrophs, which use photons. Chemotrophs can be either autotrophic or heterotrophic. Chemotrophs can be found in areas where electron donors are present in high concentration, for instance around hydrothermal vents.
Chemoautotroph
Chemoautotrophs are autotrophic organisms that can rely on chemosynthesis, i.e. deriving biological energy from chemical reactions of environmental inorganic substrates and synthesizing all necessary organic compounds from carbon dioxide. Chemoautotrophs can use inorganic energy sources such as hydrogen sulfide, elemental sulfur, ferrous iron, molecular hydrogen, and ammonia or organic sources to produce energy. Most chemoautotrophs are prokaryotic extremophiles, bacteria, or archaea that live in otherwise hostile environments (such as deep sea vents) and are the primary producers in such ecosystems. Chemoautotrophs generally fall into several groups: methanogens, sulfur oxidizers and reducers, nitrifiers, anammox bacteria, and thermoacidophiles. An example of one of these prokaryotes would be Sulfolobus. Chemolithotrophic growth can be dramatically fast, such as Hydrogenovibrio crunogenus with a doubling time around one hour.
The term "chemosynthesis", coined in 1897 by Wilhelm Pfeffer, originally was defined as the energy production by oxidation of inorganic substances in association with autotrophy — what would be named today as chemolithoautotrophy. Later, the term would include also the chemoorganoautotrophy, that is, it can be seen as a synonym of chemoautotrophy.
Chemoheterotroph
Chemoheterotrophs (or chemotrophic heterotrophs) are unable to fix carbon to form their own organic compounds. Chemoheterotrophs can be chemolithoheterotrophs, utilizing inorganic electron sources such as sulfur, or, much more commonly, chemoorganoheterotrophs, utilizing organic electron sources such as carbohydrates, lipids, and proteins. Most animals and fungi are examples of chemoheterotrophs, as are halophiles.
Iron- and manganese-oxidizing bacteria
Iron-oxidizing bacteria are chemotrophic bacteria that derive energy by oxidizing dissolved ferrous iron. They are known to grow and proliferate in waters containing iron concentrations as low as 0.1 mg/L. However, at least 0.3 ppm of dissolved oxygen is needed to carry out the oxidation.
Iron has many existing roles in biology not related to redox reactions; examples include iron–sulfur proteins, hemoglobin, and coordination complexes. Iron has a widespread distribution globally and is considered one of the most abundant elements in the Earth's crust, soil, and sediments. Iron is a trace element in marine environments. Its role as the electron donor for some chemolithotrophs is probably very ancient.
See also
Chemosynthesis
Lithotroph
Methanogen (feeds on hydrogen)
Methanotroph
RISE project – expedition that discovered high-temperature vent communities
Notes
References
Biology terminology
Microbial growth and nutrition
Planktology
Trophic ecology
Ontology (information science)
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.
Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management).
What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence).
Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Artificial intelligence has retained considerable attention regarding applied ontology in subfields like natural language processing within machine translation and knowledge representation, but ontology editors are often used in a range of fields, including biomedical informatics and industry. Such efforts often rely on ontology editing tools such as Protégé.
Ontology in Philosophy
Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since recorded history.
Etymology
The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e. "being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation.
While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius).
The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey.
Formal Ontology
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real world or robotic grounding, publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for an AAAI Summer Symposium on Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.
In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to earlier idea of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.
Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions – that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity."
Formal Ontology Components
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes and relations.
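The sketch below is a deliberately simplified, hypothetical illustration (not tied to any particular ontology language) of these four kinds of components — classes, individuals, attributes, and relations — represented with plain Python data structures; real ontologies would normally be encoded in a language such as OWL.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OntologyClass:
    name: str
    parent: Optional[str] = None       # subsumption ("is-a") link to a parent class

@dataclass
class Individual:
    name: str
    of_class: str                      # class membership (instantiation)
    attributes: dict = field(default_factory=dict)

# Classes with a small subsumption hierarchy (illustrative names only)
classes = {
    "Agent":  OntologyClass("Agent"),
    "Person": OntologyClass("Person", parent="Agent"),
    "City":   OntologyClass("City"),
}

# Individuals (instances) carrying attributes
individuals = {
    "alice":  Individual("alice", "Person", {"birth_year": 1980}),
    "geneva": Individual("geneva", "City", {"population": 200_000}),
}

# Relations between individuals, stored as (subject, relation, object) triples
relations = [("alice", "lives_in", "geneva")]

def superclasses(name):
    """Walk the is-a hierarchy upwards from a class."""
    chain = []
    while name is not None:
        chain.append(name)
        name = classes[name].parent
    return chain

print(superclasses("Person"))                      # ['Person', 'Agent']
print([t for t in relations if t[0] == "alice"])   # relations involving alice
```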
Types
Domain ontology
A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).
At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and it is a recent event to see the issue sidestepped by having multiple domain ontologies using the same upper ontology like the OBO Foundry.
Upper ontology
An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies.
Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies.
Hybrid ontology
The Gellish ontology is an example of a combination of an upper and a domain ontology.
Visualization
A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL).
Engineering
Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain. It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them.
Ontology engineering aims to make explicit the knowledge contained in software applications, and organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include:
Ensuring the ontology is current with domain knowledge and term use
Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem
Ensuring the ontology can support its use cases
Editors
Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages.
Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc.
Learning
Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges.
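As a toy illustration of the term-extraction step, the following sketch (a naive frequency-based approach with a made-up stop-word list, not a description of any particular ontology-learning system) pulls candidate domain terms out of free text; real systems add linguistic analysis, multi-word term detection, and relation extraction.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "for", "by"}

def candidate_terms(text: str, top_n: int = 5):
    """Return the most frequent non-stop-word tokens as candidate domain terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS and len(t) > 2)
    return counts.most_common(top_n)

sample = (
    "An ontology describes classes, relations and individuals. "
    "Domain ontologies describe the classes and relations of a single domain."
)
print(candidate_terms(sample))
```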
Research
Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Simply put, epistemological assumptions force researchers to question how they arrive at the knowledge they have.
Languages
An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based:
Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms.
Common logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other.
The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions.
DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability.
The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language.
IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies.
KIF is a syntax for first-order logic that is based on S-expressions. SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology.
MOF and UML are standards of the OMG
Olog is a category theoretic approach to ontologies, emphasizing translations between ontologies using functors.
OBO, a language used for biological and biomedical ontologies.
OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies.
OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs.
Rule Interchange Format (RIF) and F-Logic combine ontologies and rules.
Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse Plug-in.
SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies.
TOVE Project, TOronto Virtual Enterprise project
Published examples
Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic Wordnet but with ontologically-clean content.
AURUM – Information Security Ontology, An ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management.
BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages
Basic Formal Ontology, a formal upper ontology designed to support scientific research
BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data
BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature
SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO).
CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB)
CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management
CIDOC Conceptual Reference Model, an ontology for cultural heritage
COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary.
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science
Cyc, a large Foundation Ontology for formal representation of the universe of discourse
Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes
DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering
Drammar, ontology of drama
Dublin Core, a simple ontology for documents and publishing
Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry
Foundational, Core and Linguistic Ontologies
Foundational Model of Anatomy, an ontology for human anatomy
Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects
Gene Ontology for genomics
Gellish English dictionary, an ontology that includes a dictionary and taxonomy that includes an upper ontology and a lower ontology that focusses on industrial and business applications in engineering, technology and procurement.
Geopolitical ontology, an ontology describing geopolitical information created by Food and Agriculture Organization(FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of geopolitical ontology and a module maker to download modules of the geopolitical ontology into different formats (RDF, XML, and EXCEL). See more information at FAO Country Profiles.
GAO (General Automotive Ontology) – an ontology for the automotive industry that includes 'car' extensions
GOLD, General Ontology for Linguistic Description
GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between clients systems and natural language technology
IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts.
Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology.
LPL, Landmark Pattern Language
NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise
NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain.
OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies
OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine
OMNIBUS Ontology, an ontology of learning, instruction, and instructional design
Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations
ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta.
Plant Ontology for plant structures and growth/development stages, etc.
POPE, Purdue Ontology for Pharmaceutical Engineering
PRO, the Protein Ontology of the Protein Information Resource, Georgetown University
ProbOnto, knowledge base and ontology of probability distributions.
Program abstraction taxonomy
Protein Ontology for proteomics
RXNO Ontology, for name reactions in chemistry
SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SDC community, amongst other applications (see list on SCDO website).
Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines
Sequence Ontology, for representing genomic feature types found on biological sequences
SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms)
Suggested Upper Merged Ontology, a formal upper ontology
Systems Biology Ontology (SBO), for computational models in biology
SWEET, Semantic Web for Earth and Environmental Terminology
SSN/SOSA, The Semantic Sensor Network Ontology (SSN) and Sensor, Observation, Sample, and Actuator Ontology (SOSA) are W3C Recommendation and OGC Standards for describing sensors and their observations.
ThoughtTreasure ontology
TIME-ITEM, Topics for Indexing Medical Education
Uberon, representing animal anatomical structures
UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc
WordNet, a lexical reference system
YAMATO, Yet Another More Advanced Top-level Ontology
YSO – General Finnish Ontology
The W3C Linking Open Data community project coordinates attempts to converge different ontologies into worldwide Semantic Web.
Libraries
The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries.
The following are libraries of human-selected ontologies.
COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository.
DAML Ontology Library maintains a legacy of ontologies in DAML.
Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies.
Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies.
SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL.
The following are both directories and search engines.
OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine.
Bioportal (ontology repository of NCBO)
Linked Open Vocabularies
OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies.
Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB Project "on hold" since 2004).
Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies.
Open Ontology Repository initiative
ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO.
Examples of applications
In general, ontologies can be used beneficially in several fields.
Enterprise applications. A more concrete example is SAPPHIRE (Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines), a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health.
Geographic information systems bring together data from different sources and benefit therefore from ontological metadata which helps to connect the semantics of the data.
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebral Spinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts.
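A minimal, hypothetical sketch of such ontology-backed disambiguation is shown below: each candidate concept for the abbreviation "CSF" carries a few context keywords, and the concept whose keywords best overlap the surrounding sentence is chosen. Real systems use much richer ontologies and statistical or neural models; the keyword lists here are illustrative assumptions.

```python
# Hypothetical sketch: disambiguate the abbreviation "CSF" using context keywords
# associated with each candidate ontology concept (illustration only).
CONCEPTS = {
    "Colony Stimulating Factor": {"cytokine", "bone", "marrow", "granulocyte", "receptor"},
    "Cerebrospinal Fluid":       {"lumbar", "puncture", "brain", "spinal", "pressure"},
}

def disambiguate(sentence: str) -> str:
    """Pick the concept whose keywords overlap the sentence the most."""
    words = set(sentence.lower().split())
    scores = {concept: len(words & kws) for concept, kws in CONCEPTS.items()}
    return max(scores, key=scores.get)

print(disambiguate("CSF was collected by lumbar puncture and its pressure recorded"))
print(disambiguate("The receptor binds CSF and stimulates granulocyte production"))
```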
See also
Commonsense knowledge bases
Concept map
Controlled vocabulary
Classification scheme (information science)
Folksonomy
Formal concept analysis
Formal ontology
General Concept Lattice
Knowledge graph
Lattice
Ontology
Ontology alignment
Ontology chart
Open Semantic Framework
Semantic technology
Soft ontology
Terminology extraction
Weak ontology
Web Ontology Language
Related philosophical concepts
Alphabet of human thought
Characteristica universalis
Interoperability
Level of measurement
Metalanguage
Natural semantic metalanguage
References
Further reading
External links
Knowledge Representation at Open Directory Project
Library of ontologies (Archive, Unmaintained)
GoPubMed using Ontologies for searching
ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology
Use of Ontologies in Natural Language Processing
Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit.
Standardization of Ontologies
Knowledge engineering
Technical communication
Information science
Semantic Web
Knowledge representation
Knowledge bases
Ontology editors
Molecular diffusion
Molecular diffusion, often simply called diffusion, is the thermal motion of all (liquid or gas) particles at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size (mass) of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems, S1 and S2, at the same temperature and capable of exchanging particles. If there is a difference in chemical potential between them, for example μ1 > μ2 (where μ is the chemical potential), a net flow of particles will occur from S1 to S2, because the combined system tends toward lower free energy and maximum entropy.
Molecular diffusion is typically described mathematically using Fick's laws of diffusion.
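For illustration, the sketch below (with arbitrary parameter values) integrates Fick's second law, ∂C/∂t = D ∂²C/∂x², with a simple explicit finite-difference scheme, showing how an initial concentration spike spreads out and flattens toward a uniform distribution.

```python
import numpy as np

# Explicit finite-difference integration of Fick's second law in 1D
# (illustrative parameters only).
D = 1e-9          # diffusion coefficient, m^2/s
L = 1e-3          # domain length, m
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D            # time step chosen to satisfy D*dt/dx^2 < 0.5
steps = 5000

c = np.zeros(nx)
c[nx // 2] = 1.0                 # initial concentration spike in the middle

for _ in range(steps):
    # second spatial derivative, with approximate zero-flux boundaries
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2
    lap[-1] = (c[-2] - c[-1]) / dx**2
    c = c + D * dt * lap

print(f"peak/mean concentration after {steps*dt:.1f} s: {c.max()/c.mean():.2f}")
```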
Applications
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
Sintering to produce solid materials (powder metallurgy, production of ceramics)
Chemical reactor design
Catalyst design in chemical industry
Steel can be diffused (e.g., with carbon or nitrogen) to modify its properties
Doping during production of semiconductors.
Significance
Diffusion is part of the transport phenomena. Among mass transport mechanisms, molecular diffusion is known as one of the slower ones.
Biology
In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis.
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
Tracer, self- and chemical diffusion
Fundamentally, two types of diffusion are distinguished:
Tracer diffusion and Self-diffusion, which is a spontaneous mixing of molecules taking place in the absence of concentration (or chemical potential) gradient. This type of diffusion can be followed using isotopic tracers, hence the name. The tracer diffusion is usually assumed to be identical to self-diffusion (assuming no significant isotopic effect). This diffusion can take place under equilibrium. An excellent method for the measurement of self-diffusion coefficients is pulsed field gradient (PFG) NMR, where no isotopic tracers are needed. In a so-called NMR spin echo experiment this technique uses the nuclear spin precession phase, allowing to distinguish chemically and physically completely identical species e.g. in the liquid phase, as for example water molecules within liquid water. The self-diffusion coefficient of water has been experimentally determined with high accuracy and thus serves often as a reference value for measurements on other liquids. The self-diffusion coefficient of neat water is: 2.299·10−9 m2·s−1 at 25 °C and 1.261·10−9 m2·s−1 at 4 °C.
Chemical diffusion occurs in a presence of concentration (or chemical potential) gradient and it results in net transport of mass. This is the process described by the diffusion equation. This diffusion is always a non-equilibrium process, increases the system entropy, and brings the system closer to equilibrium.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
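Using the self-diffusion coefficient of water quoted above, a short back-of-the-envelope sketch (based on the Einstein relation ⟨r²⟩ = 6Dt in three dimensions) estimates how far a water molecule typically wanders by self-diffusion in a given time; the chosen times are arbitrary examples.

```python
import math

D_water_25C = 2.299e-9  # m^2/s, self-diffusion coefficient of water at 25 °C (value from the text)

def rms_displacement(D: float, t: float) -> float:
    """Root-mean-square displacement in 3D after time t, from <r^2> = 6 D t."""
    return math.sqrt(6.0 * D * t)

for t in (1e-3, 1.0, 3600.0):  # 1 ms, 1 s, 1 h
    r = rms_displacement(D_water_25C, t)
    print(f"t = {t:8.3f} s  ->  rms displacement ≈ {r*1e6:8.1f} µm")
```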
Non-equilibrium system
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not at rest yet). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time, and where classical results may locally apply. As the name suggests, this process is not a true equilibrium since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
Concentration dependent "collective" diffusion
Collective diffusion is the diffusion of a large number of particles, most often within a solvent.
Contrary to brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In the case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient D, the speed of diffusion in the particle diffusion equation, is independent of particle concentration. In other cases, resulting interactions between particles within the solvent will account for the following effects:
the diffusion coefficient D in the particle diffusion equation becomes dependent of concentration. For an attractive interaction between particles, the diffusion coefficient tends to decrease as concentration increases. For a repulsive interaction between particles, the diffusion coefficient tends to increase as concentration increases.
In the case of an attractive interaction between particles, particles exhibit a tendency to coalesce and form clusters if their concentration lies above a certain threshold. This is equivalent to a precipitation chemical reaction (and if the considered diffusing particles are chemical molecules in solution, then it is a precipitation).
Molecular diffusion of gases
Transport of material in stagnant fluid or across streamlines of a fluid in a laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules at the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A.
Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as −dCA/dx, where CA is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is −dCB/dx. The rate of diffusion of A, NA, depends on the concentration gradient and the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's law
N_A = -D_{AB} \frac{dC_A}{dx} (only applicable for no bulk motion)
where D is the diffusivity of A through B, proportional to the average molecular velocity and, therefore, dependent on the temperature and pressure of the gases. The rate of diffusion NA is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of diffusion is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady state conditions, in which neither dCA/dx nor dCB/dx changes with time, equimolecular counterdiffusion is considered first.
Equimolecular counterdiffusion
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is N_A = -N_B.
The partial pressure of A changes by dPA over the distance dx. Similarly, the partial pressure of B changes dPB. As there is no difference in total pressure across the element (no bulk flow), we have
\frac{dP_A}{dx} = -\frac{dP_B}{dx}.
For an ideal gas the partial pressure is related to the molar concentration by the relation
P_A V = n_A R T
where nA is the number of moles of gas A in a volume V. As the molar concentration CA is equal to nA/V, therefore
C_A = \frac{P_A}{RT}
Consequently, for gas A,
N_A = -\frac{D_{AB}}{RT} \frac{dP_A}{dx}
where DAB is the diffusivity of A in B. Similarly,
N_B = -\frac{D_{BA}}{RT} \frac{dP_B}{dx}
Considering that dPA/dx = -dPB/dx, it therefore follows that DAB = DBA = D. If the partial pressure of A at x1 is PA1 and at x2 is PA2, integration of the above equation gives
N_A = \frac{D}{RT} \, \frac{P_{A1} - P_{A2}}{x_2 - x_1}
A similar equation may be derived for the counterdiffusion of gas B.
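A worked numerical sketch of the integrated equation is given below; the diffusivity, partial pressures, and film thickness are arbitrary illustrative values, not data from the source.

```python
# Illustrative calculation of the equimolar counterdiffusion flux
# N_A = D (P_A1 - P_A2) / (R T (x2 - x1)); all numbers are made up for illustration.
R = 8.314          # J/(mol K)
T = 298.15         # K
D = 2.0e-5         # m^2/s, gas-phase diffusivity (typical order of magnitude)
P_A1 = 2.0e4       # Pa, partial pressure of A at x1
P_A2 = 1.0e4       # Pa, partial pressure of A at x2
dx = 0.01          # m, distance between the two planes

N_A = D * (P_A1 - P_A2) / (R * T * dx)
print(f"N_A ≈ {N_A:.3e} mol/(m^2 s)")   # flux of A; N_B = -N_A
```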
See also
References
External links
Some pictures that display diffusion and osmosis
An animation describing diffusion.
A tutorial on the theory behind and solution of the Diffusion Equation.
NetLogo Simulation Model for Educational Use (Java Applet)
Short movie on brownian motion (includes calculation of the diffusion coefficient)
A basic introduction to the classical theory of volume diffusion (with figures and animations)
Diffusion on the nanoscale (with figures and animations)
Transport phenomena
Diffusion
Underwater diving physics
Pourbaix diagram
In electrochemistry, and more generally in solution chemistry, a Pourbaix diagram, also known as a potential/pH diagram, EH–pH diagram or a pE/pH diagram, is a plot of possible thermodynamically stable phases (i.e., at chemical equilibrium) of an aqueous electrochemical system. Boundaries (50 %/50 %) between the predominant chemical species (aqueous ions in solution, or solid phases) are represented by lines. As such a Pourbaix diagram can be read much like a standard phase diagram with a different set of axes. Similarly to phase diagrams, they do not allow for reaction rate or kinetic effects. Beside potential and pH, the equilibrium concentrations are also dependent upon, e.g., temperature, pressure, and concentration. Pourbaix diagrams are commonly given at room temperature, atmospheric pressure, and molar concentrations of 10−6 and changing any of these parameters will yield a different diagram.
The diagrams are named after Marcel Pourbaix (1904–1998), the Russian-born Belgian chemist who invented them.
Naming
Pourbaix diagrams are also known as EH-pH diagrams due to the labeling of the two axes.
Diagram
The vertical axis is labeled EH for the voltage potential with respect to the standard hydrogen electrode (SHE) as calculated by the Nernst equation. The "H" stands for hydrogen, although other reference standards may be used; the diagrams are for room temperature only.
For a reversible redox reaction described by the following chemical equilibrium, written as a reduction:
a A + h H+ + z e− ⇌ b B + c H2O
With the corresponding equilibrium constant K:
K = \frac{\{B\}^b \{H_2O\}^c}{\{A\}^a \{H^+\}^h}
The Nernst equation is:
E_h = E^\circ - \frac{V_T}{z} \ln \frac{\{B\}^b \{H_2O\}^c}{\{A\}^a \{H^+\}^h}
sometimes formulated as:
E_h = E^\circ - \frac{\lambda V_T}{z} \log_{10} \frac{\{B\}^b \{H_2O\}^c}{\{A\}^a \{H^+\}^h}
or, more simply, directly expressed numerically as:
E_h = E^\circ - \frac{0.05916\ \mathrm{V}}{z} \log_{10} \frac{\{B\}^b \{H_2O\}^c}{\{A\}^a \{H^+\}^h}
where:
V_T = RT/F ≈ 0.02569 volt is the thermal voltage or the "Nernst slope" at standard temperature
λ = ln(10) ≈ 2.30, so that λ V_T ≈ 0.05916 volt.
The horizontal axis is labeled pH for the −log function of the H+ ion activity.
The lines in the Pourbaix diagram show the equilibrium conditions, that is, where the activities are equal, for the species on each side of that line. On either side of the line, one form of the species will instead be said to be predominant.
In order to draw the position of the lines with the Nernst equation, the activity of the chemical species at equilibrium must be defined. Usually, the activity of a species is approximated as equal to the concentration (for soluble species) or partial pressure (for gases). The same values should be used for all species present in the system.
For soluble species, the lines are often drawn for concentrations of 1 M or 10−6 M. Sometimes additional lines are drawn for other concentrations.
If the diagram involves the equilibrium between a dissolved species and a gas, the pressure is usually set to P0 = 1 atm = 1.013 bar, the minimum pressure required for gas evolution from an aqueous solution at standard conditions.
In addition, changes in temperature and concentration of solvated ions in solution will shift the equilibrium lines in accordance with the Nernst equation.
The diagrams also do not take kinetic effects into account, meaning that species shown as unstable might not react to any significant degree in practice.
A simplified Pourbaix diagram indicates regions of "immunity", "corrosion" and "passivity", instead of the stable species. They thus give a guide to the stability of a particular metal in a specific environment. Immunity means that the metal is not attacked, while corrosion shows that general attack will occur. Passivation occurs when the metal forms a stable coating of an oxide or other salt on its surface, the best example being the relative stability of aluminium because of the alumina layer formed on its surface when exposed to air.
Applicable chemical systems
While such diagrams can be drawn for any chemical system, it is important to note that the addition of a metal binding agent (ligand) will often modify the diagram. For instance, carbonate has a great effect upon the diagram for uranium. (See diagrams at right). The presence of trace amounts of certain species such as chloride ions can also greatly affect the stability of certain species by destroying passivating layers.
Limitations
Although Pourbaix diagrams are useful for estimating the corrosion potential of a metal, they have some important limitations:
Equilibrium is always assumed, though in practice it may differ.
The diagram does not provide information on actual corrosion rates.
Does not apply to alloys.
Does not indicate whether passivation (in the form of oxides or hydroxides) is protective or not; diffusion of oxygen ions through thin oxide layers is possible.
Excludes corrosion by chloride ions (Cl−, etc.).
Usually applicable only to a temperature of 25 °C, which is assumed by default; Pourbaix diagrams for higher temperatures exist.
Expression of the Nernst equation as a function of pH
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. Eh explicitly denotes the electrode potential expressed versus the standard hydrogen electrode (SHE). For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):
The equilibrium constant of this reduction reaction is:
where curly braces { } indicate activities, square brackets [ ] denote molar or molal concentrations, γ represents the activity coefficients, and the stoichiometric coefficients are shown as exponents.
Activities correspond to thermodynamic concentrations and take into account the electrostatic interactions between ions present in solution. When the concentrations are not too high, the activity can be related to the measurable concentration by a linear relationship with the activity coefficient:
The half-cell standard reduction potential is given by
E° = −ΔG°/(zF)
where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is the Faraday constant. The Nernst equation relates pH and Eh as follows:
In the following, the Nernst slope (or thermal voltage) VT is used, which has a value of 0.02569... V at STP. When base-10 logarithms are used, VT λ = 0.05916... V at STP, where λ = ln(10) = 2.3026.
This equation is the equation of a straight line for Eh as a function of pH, with a slope of −0.05916 (h/z) volt (pH has no units), where h and z are respectively the numbers of protons and electrons exchanged in the half-reaction.
This equation predicts lower Eh at higher pH values. This is observed for the reduction of O2 into H2O or OH−, and for the reduction of H+ into H2. The potential is then often noted Eh to indicate that it is expressed versus the standard hydrogen electrode (SHE), whose potential is 0 by convention under standard conditions (T = 298.15 K = 25 °C = 77 °F, Pgas = 1 atm (1.013 bar), concentrations = 1 M and thus pH = 0).
Calculation of a Pourbaix diagram
When the solution is sufficiently dilute, the activity coefficients tend to one and the activities can be considered equal to the molar, or molal, concentrations. The term regrouping all the activity coefficients is then equal to one, and the Nernst equation can be written simply with the concentrations, denoted here with square brackets [ ]:
There are three types of line boundaries in a Pourbaix diagram: Vertical, horizontal, and sloped.
Vertical boundary line
When no electrons are exchanged (z = 0), the equilibrium between the species involved depends only on pH and is not affected by the electrode potential. In this case, the reaction is a classical acid-base reaction involving only protonation/deprotonation of dissolved species. The boundary line will be a vertical line at a particular value of pH. The reaction equation may be written:
and the energy balance is written as ΔG° = −RT ln K, where K is the equilibrium constant:
Thus:
or, in base-10 logarithms,
which may be solved for the particular value of pH.
For example, consider the iron and water system, and the equilibrium line between the ferric ion Fe3+ and hematite Fe2O3. The reaction equation is:
2 Fe^{3+}(aq) + 3 H_2 O (l) <=> Fe_2 O_3 (s) + 6 H^+ (aq)
with equilibrium constant K. The pH of the vertical line on the Pourbaix diagram can then be calculated:
Because the activities (or the concentrations) of the solid phases and water are equal to unity ([Fe2O3] = [H2O] = 1), the pH depends only on the concentration of dissolved Fe3+:
At STP, for [Fe3+] = 10−6, this yields pH = 1.76.
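As a worked illustration (not from the original article), the vertical-boundary pH can be computed from the relation log K = −6 pH − 2 log[Fe3+]. The log K value used below is an assumed placeholder, back-calculated so that the result reproduces the pH = 1.76 quoted above for [Fe3+] = 10−6; it is not a tabulated thermodynamic value.

```python
import math

def vertical_boundary_pH(log_K: float, conc_fe3: float) -> float:
    """pH at which 2 Fe3+ + 3 H2O <=> Fe2O3 + 6 H+ is at equilibrium.

    With the solid and water activities set to 1, K = [H+]^6 / [Fe3+]^2,
    so log K = -6*pH - 2*log10([Fe3+])  =>  pH = -(log K + 2*log10([Fe3+])) / 6.
    """
    return -(log_K + 2 * math.log10(conc_fe3)) / 6

# Assumed log K ~ 1.43 (placeholder); reproduces pH ~ 1.76 for [Fe3+] = 1e-6
print(round(vertical_boundary_pH(log_K=1.43, conc_fe3=1e-6), 2))
```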
Horizontal boundary line
When H+ and OH− ions are not involved in the reaction, the boundary line is horizontal and independent of pH. The reaction equation is thus written:
As before, the standard Gibbs free energy change is ΔG° = −RT ln K:
Using the definition of the electrode potential ∆G = -zFE, where F is the Faraday constant, this may be rewritten as a Nernst equation:
or, using base-10 logarithms:
For the Fe3+/Fe2+ redox couple taken as example here, considering the boundary line between Fe2+ and Fe3+, the half-reaction equation is:
Fe^3+ (aq) + e^- <=> Fe^2+ (aq)
Since H+ ions are not involved in this redox reaction, it is independent of pH.
Eo = 0.771 V with only one electron involved in the redox reaction.
The potential Eh is a function of temperature via the thermal voltage and directly depends on the ratio of the concentrations of the Fe3+ and Fe2+ ions:
For both ionic species at the same concentration at STP, log 1 = 0, so Eh = E°, and the boundary will be a horizontal line at Eh = 0.771 volt. The potential will vary with temperature.
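A minimal sketch of this Nernst calculation follows, using the E° = 0.771 V and z = 1 stated above; the concentration ratio and temperature are free parameters, and the physical constants are the usual tabulated values.

```python
import math

R = 8.314462618      # gas constant, J/(mol K)
F = 96485.33212      # Faraday constant, C/mol

def eh_fe3_fe2(ratio_fe3_to_fe2: float, T: float = 298.15) -> float:
    """Eh (V vs. SHE) for Fe3+ + e- <=> Fe2+ at absolute temperature T."""
    E0, z = 0.771, 1                 # standard potential and electrons, from the text
    Vt = R * T / F                   # thermal voltage (~0.02569 V at 298.15 K)
    return E0 + (Vt * math.log(10) / z) * math.log10(ratio_fe3_to_fe2)

print(eh_fe3_fe2(1.0))     # 0.771 V: the horizontal boundary for equal concentrations
print(eh_fe3_fe2(10.0))    # ~0.830 V: a tenfold excess of Fe3+ raises the potential
```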
Sloped boundary line
In this case, both electrons and H+ ions are involved and the electrode potential is a function of pH. The reaction equation may be written:
Using the expressions for the free energy in terms of potentials, the energy balance is given by a Nernst equation:
For the iron and water example, considering the boundary line between the ferrous ion Fe2+ and hematite Fe2O3, the reaction equation is:
Fe2O3(s) + 6 H+(aq) + 2 e^- <=> 2 Fe^{2+}(aq) + 3 H2O(l)
with the corresponding standard potential E°.
The equation of the boundary line, expressed in base-10 logarithms is:
As the activities (or the concentrations) of the solid phases and of water are always taken equal to unity by convention in the definition of the equilibrium constant K ([Fe2O3] = [H2O] = 1),
the Nernst equation is thus limited to the dissolved species Fe2+ and H+ and is written as:
For [Fe2+] = 10−6 M, this yields:
Note the negative slope (−0.1775 V per pH unit) of this line in an Eh–pH diagram.
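A hedged sketch of the sloped boundary is given below. The slope −(0.05916/2) × 6 = −0.1775 V per pH unit matches the value noted above; the standard potential E0 passed in the example call is only an illustrative placeholder, since its numerical value is not given in the text.

```python
import math

def eh_hematite_fe2(pH: float, conc_fe2: float, E0: float) -> float:
    """Eh for Fe2O3(s) + 6 H+ + 2 e- <=> 2 Fe2+ + 3 H2O (solid and water activities = 1)."""
    slope = 0.05916                  # V per decade at 25 degC
    z, h = 2, 6                      # electrons and protons in the half-reaction
    return E0 - (slope / z) * (2 * math.log10(conc_fe2) + h * pH)

# With [Fe2+] = 1e-6 the line is Eh = (E0 + 0.355) - 0.1775 * pH.
# E0 = 0.728 V below is only a placeholder value for illustration.
for pH in (0, 2, 4):
    print(pH, round(eh_hematite_fe2(pH, conc_fe2=1e-6, E0=0.728), 3))
```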
The stability region of water
In many cases, the possible conditions in a system are limited by the stability region of water. In a Pourbaix diagram, the limits of stability of water are commonly marked by two dashed lines, and the stability region for water falls between these two lines.
Under highly reducing conditions (low EH), water is reduced to hydrogen according to:
2 H+ + 2e^- -> H2(g) (at low pH)
and,
2 H2O + 2e^- -> H2(g) + 2 OH^- (at high pH)
Using the Nernst equation, setting E° = 0 V as defined by convention for the standard hydrogen electrode (SHE, serving as reference in the reduction potentials series) and the hydrogen gas fugacity (corresponding to chemical activity for a gas) at 1, the equation for the lower stability line of water in the Pourbaix diagram at standard temperature and pressure is:
Eh = −0.05916 pH
Below this line, water is reduced to hydrogen, and it will usually not be possible to pass beyond this line as long as there is still water present in the system to be reduced.
Correspondingly, under highly oxidizing conditions (high EH) water is oxidized into oxygen gas according to:
2 H2O -> 4 H+ + O2(g) + 4e^- (at low pH)
and,
4 OH^- -> O2(g) + 2 H_2O + 4e^- (at high pH)
Using the Nernst equation as above, but with E° = −ΔG°H2O/(2F) = 1.229 V for water oxidation, gives an upper stability limit of water as a function of the pH value:
Eh = 1.229 − 0.05916 pH
at standard temperature and pressure. Above this line, water is oxidized to form oxygen gas, and it will usually not be possible to pass beyond this line as long as there is still water present in the system to be oxidized.
Since the upper and lower stability lines have the same negative slope (−59 mV per pH unit), they are parallel in a Pourbaix diagram, and the reduction potential decreases with pH.
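The two water-stability lines can be sketched directly from the values given above (E° = 0 V for H+/H2, E° = 1.229 V for O2/H2O, slope of −59 mV per pH unit):

```python
def water_stability_window(pH: float):
    """Lower (H2 evolution) and upper (O2 evolution) Eh limits of water, in volts."""
    slope = 0.05916                   # 59 mV per pH unit
    lower = 0.000 - slope * pH        # 2 H+ + 2 e- -> H2, E0 = 0 V (SHE)
    upper = 1.229 - slope * pH        # O2 + 4 H+ + 4 e- -> 2 H2O, E0 = 1.229 V
    return lower, upper

for pH in (0, 7, 14):
    lo, hi = water_stability_window(pH)
    print(f"pH {pH:2d}: water is stable for {lo:+.3f} V < Eh < {hi:+.3f} V")
```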
Applications
Pourbaix diagrams have many applications in different fields dealing with, e.g., corrosion problems, geochemistry, and environmental sciences. Used correctly, a Pourbaix diagram will help shed light not only on the nature of the species present in aqueous solution or in the solid phases, but may also help in understanding the reaction mechanism.
Concept of pe in environmental chemistry
Pourbaix diagrams are widely used to describe the behaviour of chemical species in the hydrosphere. In this context, the reduction potential pe is often used instead of Eh. The main advantage is to work directly with a logarithmic scale.
pe is a dimensionless number and can easily be related to Eh by the equation:
pe = Eh / (λ VT)
where VT = RT/F is the thermal voltage, with R the gas constant, T the absolute temperature in kelvin (298.15 K = 25 °C = 77 °F), and F the Faraday constant (96 485 coulomb per mole of electrons). Lambda: λ = ln(10) ≈ 2.3026.
Moreover, pe = −log{e−}, an expression with a similar form to that of pH.
pe values in environmental chemistry range from −12 to +25, since at low or high potentials water will be respectively reduced or oxidized. In environmental applications, the concentration of dissolved species is usually set to a value between 10−2 M and 10−5 M for the determination of the equilibrium lines.
See also
Nernst equation
Dependency of reduction potential on pH
Ellingham diagram
Latimer diagram
Frost diagram
Ionic partition diagram
Bjerrum plot
Notes
References
External links
Marcel Pourbaix — Corrosion Doctors
DoITPoMS Teaching and Learning Package- "The Nernst Equation and Pourbaix Diagrams"
Software
ChemEQL Free software for calculation of chemical equilibria from Eawag.
FactSage Commercial thermodynamic databank software, also available in a free web application.
The Geochemist's Workbench Commercial geochemical modeling software from Aqueous Solutions LLC.
GWB Community Edition Free download of the popular geochemical modeling software package.
HYDRA/MEDUSA Free software for creating chemical equilibrium diagrams from the KTH Department of Chemistry.
HSC Chemistry Commercial thermochemical calculation software from Outotec Oy.
PhreePlot Free program for making geochemical plots using the USGS code PHREEQC.
Thermo-Calc Windows Commercial software for thermodynamic calculations from Thermo-Calc Software.
Materials Project Public website that can generate Pourbaix diagrams from a large database of computed material properties, hosted at NERSC.
Electrochemistry
Phase transitions | 0.780374 | 0.990672 | 0.773094 |
Magnum opus (alchemy) | In alchemy, the Magnum Opus or Great Work is a term for the process of working with the prima materia to create the philosopher's stone. It has been used to describe personal and spiritual transmutation in the Hermetic tradition, attached to laboratory processes and chemical color changes, used as a model for the individuation process, and as a device in art and literature. The magnum opus has been carried forward in New Age and neo-Hermetic movements which sometimes attached new symbolism and significance to the processes. The original process philosophy has four stages:
nigredo, the blackening or melanosis
albedo, the whitening or leucosis
citrinitas, the yellowing or xanthosis
rubedo, the reddening, purpling, or iosis
The origin of these four phases can be traced at least as far back as the first century. Zosimus of Panopolis wrote that it was known to Mary the Jewess. The development of black, white, yellow, and red can also be found in the Physika kai Mystika of Pseudo-Democritus, which is often considered to be one of the oldest books on alchemy. After the 15th century, many writers tended to compress citrinitas into rubedo and consider only three stages. Other color stages are sometimes mentioned, most notably the cauda pavonis (peacock's tail) in which an array of colors appear.
The magnum opus had a variety of alchemical symbols attached to it. Birds like the raven, swan, and phoenix could be used to represent the progression through the colors. Similar color changes could be seen in the laboratory, where for example, the blackness of rotting, burnt, or fermenting matter would be associated with nigredo.
Expansion on the four stages
Alchemical authors sometimes elaborated on the three or four color model by enumerating a variety of chemical steps to be performed. Though these were often arranged in groups of seven or twelve stages, there is little consistency in the names of these processes, their number, their order, or their description.
Various alchemical documents were directly or indirectly used to justify these stages. The Tabula Smaragdina is the oldest document said to provide a "recipe". Others include the Mutus Liber, the twelve keys of Basil Valentine, the emblems of Steffan Michelspacher, and the twelve gates of George Ripley. Ripley's twelve gates are given as:
Calcination
Solution (or Dissolution)
Separation
Conjunction
Putrefaction
Congelation
Cibation
Sublimation
Fermentation
Exaltation
Multiplication
Projection
In another example from the sixteenth century, Samuel Norton gives the following fourteen stages:
Some alchemists also circulated steps for the creation of practical medicines and substances that have little to do with the magnum opus. The cryptic and often symbolic language used to describe both adds to the confusion, but it is clear that there is no single standard step-by-step recipe given for the creation of the philosopher's stone.
Magnum opus in literature and entertainment
References | 0.774528 | 0.998057 | 0.773023 |
Solubility equilibrium | Solubility equilibrium is a type of dynamic equilibrium that exists when a chemical compound in the solid state is in chemical equilibrium with a solution of that compound. The solid may dissolve unchanged, with dissociation, or with chemical reaction with another constituent of the solution, such as acid or alkali. Each solubility equilibrium is characterized by a temperature-dependent solubility product which functions like an equilibrium constant. Solubility equilibria are important in pharmaceutical, environmental and many other scenarios.
Definitions
A solubility equilibrium exists when a chemical compound in the solid state is in chemical equilibrium with a solution containing the compound. This type of equilibrium is an example of dynamic equilibrium in that some individual molecules migrate between the solid and solution phases such that the rates of dissolution and precipitation are equal to one another. When equilibrium is established and the solid has not all dissolved, the solution is said to be saturated. The concentration of the solute in a saturated solution is known as the solubility. Units of solubility may be molar (mol dm−3) or expressed as mass per unit volume, such as μg mL−1. Solubility is temperature dependent. A solution containing a higher concentration of solute than the solubility is said to be supersaturated. A supersaturated solution may be induced to come to equilibrium by the addition of a "seed" which may be a tiny crystal of the solute, or a tiny solid particle, which initiates precipitation.
There are three main types of solubility equilibria.
Simple dissolution.
Dissolution with dissociation reaction. This is characteristic of salts. The equilibrium constant is known in this case as a solubility product.
Dissolution with ionization reaction. This is characteristic of the dissolution of weak acids or weak bases in aqueous media of varying pH.
In each case an equilibrium constant can be specified as a quotient of activities. This equilibrium constant is dimensionless as activity is a dimensionless quantity. However, use of activities is very inconvenient, so the equilibrium constant is usually divided by the quotient of activity coefficients, to become a quotient of concentrations. See Equilibrium chemistry#Equilibrium constant for details. Moreover, the activity of a solid is, by definition, equal to 1 so it is omitted from the defining expression.
For a chemical equilibrium
the solubility product, Ksp for the compound ApBq is defined as follows
where [A] and [B] are the concentrations of A and B in a saturated solution. A solubility product has a similar functionality to an equilibrium constant though formally Ksp has the dimension of (concentration)p+q.
Effects of conditions
Temperature effect
Solubility is sensitive to changes in temperature. For example, sugar is more soluble in hot water than cool water. It occurs because solubility products, like other types of equilibrium constants, are functions of temperature. In accordance with Le Chatelier's Principle, when the dissolution process is endothermic (heat is absorbed), solubility increases with rising temperature. This effect is the basis for the process of recrystallization, which can be used to purify a chemical compound. When dissolution is exothermic (heat is released) solubility decreases with rising temperature.
Sodium sulfate shows increasing solubility with temperature below about 32.4 °C, but a decreasing solubility at higher temperature. This is because the solid phase is the decahydrate below the transition temperature, but a different hydrate above that temperature.
The dependence on temperature of solubility for an ideal solution (achieved for low solubility substances) is given by the following expression containing the enthalpy of melting, ΔmH, and the mole fraction of the solute at saturation:
where the enthalpy of melting is the difference between the partial molar enthalpy of the solute at infinite dilution and the enthalpy per mole of the pure crystal.
This differential expression for a non-electrolyte can be integrated on a temperature interval to give:
For nonideal solutions activity of the solute at saturation appears instead of mole fraction solubility in the derivative with respect to temperature:
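A minimal numerical sketch of the integrated form of this temperature dependence (ideal-solution case) follows, assuming a constant enthalpy of melting over the interval; the numbers in the example are illustrative and do not refer to any particular compound.

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def solubility_at_T2(x1: float, T1: float, T2: float, dH_melt: float) -> float:
    """Mole-fraction solubility at T2, given the value x1 at T1 (ideal solution).

    Integrated relation: ln(x2/x1) = -(dH_melt/R) * (1/T2 - 1/T1),
    assuming dH_melt (J/mol) is constant over [T1, T2].
    """
    return x1 * math.exp(-(dH_melt / R) * (1.0 / T2 - 1.0 / T1))

# Illustrative numbers: x = 0.010 at 298.15 K, dH_melt = 20 kJ/mol
print(solubility_at_T2(0.010, 298.15, 318.15, 20_000))   # ~0.0166: higher at higher T
```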
Common-ion effect
The common-ion effect is the effect of decreased solubility of one salt when another salt that has an ion in common with it is also present. For example, the solubility of silver chloride, AgCl, is lowered when sodium chloride, a source of the common ion chloride, is added to a suspension of AgCl in water.
The solubility, S, in the absence of a common ion can be calculated as follows. The concentrations [Ag+] and [Cl−] are equal because one mole of AgCl would dissociate into one mole of Ag+ and one mole of Cl−. Let the concentration of [Ag+(aq)] be denoted by x. Then
Ksp for AgCl is equal to 1.77 × 10−10 mol2 dm−6 at 25 °C, so the solubility is 1.33 × 10−5 mol dm−3.
Now suppose that sodium chloride is also present, at a concentration of 0.01 mol dm−3 = 0.01 M. The solubility, ignoring any possible effect of the sodium ions, is now calculated by
This is a quadratic equation in x, which is also equal to the solubility.
In the case of silver chloride, x2 is very much smaller than 0.01 M × x, so the x2 term can be ignored. Therefore
x = Ksp / 0.01 = 1.77 × 10−8 mol dm−3,
a considerable reduction from 1.33 × 10−5 mol dm−3. In gravimetric analysis for silver, the reduction in solubility due to the common ion effect is used to ensure "complete" precipitation of AgCl.
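A short sketch of the common-ion calculation, solving Ksp = x(x + cCl) exactly with the quadratic formula; the Ksp value is the commonly tabulated one for AgCl at 25 °C used above.

```python
import math

def solubility_1_1(ksp: float, c_common: float = 0.0) -> float:
    """Solubility x (mol/dm3) of a 1:1 salt when a common ion is present at c_common.

    Ksp = x * (x + c_common)  ->  x**2 + c_common*x - Ksp = 0  (take the positive root).
    """
    return (-c_common + math.sqrt(c_common ** 2 + 4.0 * ksp)) / 2.0

ksp_agcl = 1.77e-10                       # AgCl at 25 degC
print(solubility_1_1(ksp_agcl))           # ~1.33e-5 M in pure water
print(solubility_1_1(ksp_agcl, 0.01))     # ~1.77e-8 M with 0.01 M added chloride
```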
Particle size effect
The thermodynamic solubility constant is defined for large monocrystals. Solubility will increase with decreasing size of solute particle (or droplet) because of the additional surface energy. This effect is generally small unless particles become very small, typically smaller than 1 μm. The effect of the particle size on solubility constant can be quantified as follows:
where *KA is the solubility constant for the solute particles with the molar surface area A, *KA→0 is the solubility constant for substance with molar surface area tending to zero (i.e., when the particles are large), γ is the surface tension of the solute particle in the solvent, Am is the molar surface area of the solute (in m2/mol), R is the universal gas constant, and T is the absolute temperature.
Salt effects
The salt effects (salting in and salting-out) refers to the fact that the presence of a salt which has no ion in common with the solute, has an effect on the ionic strength of the solution and hence on activity coefficients, so that the equilibrium constant, expressed as a concentration quotient, changes.
Phase effect
Equilibria are defined for specific crystal phases. Therefore, the solubility product is expected to be different depending on the phase of the solid. For example, aragonite and calcite will have different solubility products even though they have the same chemical identity (calcium carbonate). Under any given conditions one phase will be thermodynamically more stable than the other; therefore, this phase will form when thermodynamic equilibrium is established. However, kinetic factors may favor the formation of the unfavorable precipitate (e.g. aragonite), which is then said to be in a metastable state.
In pharmacology, the metastable state is sometimes referred to as amorphous state. Amorphous drugs have higher solubility than their crystalline counterparts due to the absence of long-distance interactions inherent in crystal lattice. Thus, it takes less energy to solvate the molecules in amorphous phase. The effect of amorphous phase on solubility is widely used to make drugs more soluble.
Pressure effect
For condensed phases (solids and liquids), the pressure dependence of solubility is typically weak and usually neglected in practice. Assuming an ideal solution, the dependence can be quantified as:
where the quantities appearing are the mole fraction of the i-th component in the solution, the pressure, the absolute temperature, the partial molar volume of the i-th component in the solution, the partial molar volume of the i-th component in the dissolving solid, and the universal gas constant.
The pressure dependence of solubility does occasionally have practical significance. For example, precipitation fouling of oil fields and wells by calcium sulfate (which decreases its solubility with decreasing pressure) can result in decreased productivity with time.
Quantitative aspects
Simple dissolution
Dissolution of an organic solid can be described as an equilibrium between the substance in its solid and dissolved forms. For example, when sucrose (table sugar) forms a saturated solution
An equilibrium expression for this reaction can be written, as for any chemical reaction (products over reactants):
where Ko is called the thermodynamic solubility constant. The braces indicate activity. The activity of a pure solid is, by definition, unity. Therefore
The activity of a substance, A, in solution can be expressed as the product of the concentration, [A], and an activity coefficient, γ. When Ko is divided by γ, the solubility constant, Ks,
is obtained. This is equivalent to defining the standard state as the saturated solution so that the activity coefficient is equal to one. The solubility constant is a true constant only if the activity coefficient is not affected by the presence of any other solutes that may be present. The unit of the solubility constant is the same as the unit of the concentration of the solute. For sucrose Ks = 1.971 mol dm−3 at 25 °C. This shows that the solubility of sucrose at 25 °C is nearly 2 mol dm−3 (540 g/L). Sucrose is unusual in that it does not easily form a supersaturated solution at higher concentrations, as do most other carbohydrates.
Dissolution with dissociation
Ionic compounds normally dissociate into their constituent ions when they dissolve in water. For example, for silver chloride:
AgCl_{(s)} <=> Ag^+_{(aq)}{} + Cl^-_{(aq)}
The expression for the equilibrium constant for this reaction is:
where is the thermodynamic equilibrium constant and braces indicate activity. The activity of a pure solid is, by definition, equal to one.
When the solubility of the salt is very low the activity coefficients of the ions in solution are nearly equal to one. By setting them to be actually equal to one this expression reduces to the solubility product expression:
For 2:2 and 3:3 salts, such as CaSO4 and FePO4, the general expression for the solubility product is the same as for a 1:1 electrolyte
(electrical charges are omitted in general expressions, for simplicity of notation)
With an unsymmetrical salt like Ca(OH)2 the solubility expression is given by
Since the concentration of hydroxide ions is twice the concentration of calcium ions this reduces to
In general, with the chemical equilibrium
and the following table, showing the relationship between the solubility of a compound and the value of its solubility product, can be derived.
{| class="wikitable"
! Salt !! p !! q !! Solubility, S
|-
| AgCl, CaSO4, FePO4 || 1 || 1 || Ksp^(1/2)
|-
| Na2SO4, Ca(OH)2 || 2, 1 || 1, 2 || (Ksp/4)^(1/3)
|-
| Na3PO4, FeCl3 || 3, 1 || 1, 3 || (Ksp/27)^(1/4)
|-
| Al2(SO4)3, Ca3(PO4)2 || 2, 3 || 3, 2 || (Ksp/108)^(1/5)
|-
| Mp(An)q || p || q || (Ksp/(p^p q^q))^(1/(p+q))
|}
Solubility products are often expressed in logarithmic form. Thus, for calcium sulfate, with Ksp ≈ 4.9 × 10−5, log Ksp ≈ −4.3. The smaller the value of Ksp, or the more negative the log value, the lower the solubility.
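The last row of the table above can be turned into a small helper function; this is a sketch of the general relation Ksp = (pS)^p (qS)^q, and the Ksp values in the example calls are illustrative only.

```python
def solubility_from_ksp(ksp: float, p: int, q: int) -> float:
    """Solubility S of a salt MpAq from Ksp = (p*S)**p * (q*S)**q."""
    return (ksp / (p ** p * q ** q)) ** (1.0 / (p + q))

print(solubility_from_ksp(1e-10, 1, 1))      # 1:1 salt  -> 1e-5
print(solubility_from_ksp(4e-6, 1, 2))       # 1:2 salt  -> 1e-2
print(solubility_from_ksp(1.08e-23, 3, 2))   # 3:2 salt  -> ~1e-5
```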
Some salts are not fully dissociated in solution. Examples include MgSO4, famously discovered by Manfred Eigen to be present in seawater as both an inner sphere complex and an outer sphere complex. The solubility of such salts is calculated by the method outlined in dissolution with reaction.
Hydroxides
The solubility product for the hydroxide of a metal ion, Mn+, is usually defined, as follows:
However, general-purpose computer programs are designed to use hydrogen ion concentrations with the alternative definitions.
For hydroxides, solubility products are often given in a modified form, K*sp, using hydrogen ion concentration in place of hydroxide ion concentration. The two values are related by the self-ionization constant for water, Kw.
For example, at ambient temperature, for calcium hydroxide, Ca(OH)2, lg Ksp is ca. −5 and lg K*sp ≈ −5 + 2 × 14 ≈ 23.
Dissolution with reaction
A typical reaction with dissolution involves a weak base, B, dissolving in an acidic aqueous solution.
This reaction is very important for pharmaceutical products. Dissolution of weak acids in alkaline media is similarly important.
The uncharged molecule usually has lower solubility than the ionic form, so solubility depends on pH and the acid dissociation constant of the solute. The term "intrinsic solubility" is used to describe the solubility of the un-ionized form in the absence of acid or alkali.
Leaching of aluminium salts from rocks and soil by acid rain is another example of dissolution with reaction: alumino-silicates are bases which react with the acid to form soluble species, such as Al3+(aq).
Formation of a chemical complex may also change solubility. A well-known example is the addition of a concentrated solution of ammonia to a suspension of silver chloride, in which dissolution is favoured by the formation of an ammine complex.
When sufficient ammonia is added to a suspension of silver chloride, the solid dissolves. The addition of water softeners to washing powders to inhibit the formation of soap scum provides an example of practical importance.
Experimental determination
The determination of solubility is fraught with difficulties. First and foremost is the difficulty in establishing that the system is in equilibrium at the chosen temperature. This is because both precipitation and dissolution reactions may be extremely slow. If the process is very slow solvent evaporation may be an issue. Supersaturation may occur. With very insoluble substances, the concentrations in solution are very low and difficult to determine. The methods used fall broadly into two categories, static and dynamic.
Static methods
In static methods a mixture is brought to equilibrium and the concentration of a species in the solution phase is determined by chemical analysis. This usually requires separation of the solid and solution phases. In order to do this the equilibration and separation should be performed in a thermostatted room. Very low concentrations can be measured if a radioactive tracer is incorporated in the solid phase.
A variation of the static method is to add a solution of the substance in a non-aqueous solvent, such as dimethyl sulfoxide, to an aqueous buffer mixture. Immediate precipitation may occur giving a cloudy mixture. The solubility measured for such a mixture is known as "kinetic solubility". The cloudiness is due to the fact that the precipitate particles are very small resulting in Tyndall scattering. In fact the particles are so small that the particle size effect comes into play and kinetic solubility is often greater than equilibrium solubility. Over time the cloudiness will disappear as the size of the crystallites increases, and eventually equilibrium will be reached in a process known as precipitate ageing.
Dynamic methods
Solubility values of organic acids, bases, and ampholytes of pharmaceutical interest may be obtained by a process called "Chasing equilibrium solubility". In this procedure, a quantity of substance is first dissolved at a pH where it exists predominantly in its ionized form and then a precipitate of the neutral (un-ionized) species is formed by changing the pH. Subsequently, the rate of change of pH due to precipitation or dissolution is monitored and strong acid and base titrants are added to adjust the pH to discover the equilibrium conditions when the two rates are equal. The advantage of this method is that it is relatively fast as the quantity of precipitate formed is quite small. However, the performance of the method may be affected by the formation of supersaturated solutions.
See also
Solubility table: A table of solubilities of mostly inorganic salts at temperatures between 0 and 100 °C.
Solvent models
References
External links
Section 6.9: Solubilities of ionic salts. Includes a discussion of the thermodynamics of dissolution.
IUPAC–NIST solubility database
Solubility products of simple inorganic compounds
Solvent activity along a saturation line and solubility
Solubility challenge: Predict solubilities from a data base of 100 molecules. The database, of mostly compounds of pharmaceutical interest, is available at One hundred molecules with solubilities (Text file, tab separated).
A number of computer programs are available to do the calculations. They include:
CHEMEQL: A comprehensive computer program for the calculation of thermodynamic equilibrium concentrations of species in homogeneous and heterogeneous systems. Many geochemical applications.
JESS: All types of chemical equilibria can be modelled including protonation, complex formation, redox, solubility and adsorption interactions. Includes an extensive database.
MINEQL+: A chemical equilibrium modeling system for aqueous systems. Handles a wide range of pH, redox, solubility and sorption scenarios.
PHREEQC: USGS software designed to perform a wide variety of low-temperature aqueous geochemical calculations, including reactive transport in one dimension.
MINTEQ: A chemical equilibrium model for the calculation of metal speciation, solubility equilibria etc. for natural waters.
WinSGW: A Windows version of the SOLGASWATER computer program.
Equilibrium chemistry
Solutions | 0.783931 | 0.98599 | 0.772948 |
Extent of reaction | In physical chemistry and chemical engineering, extent of reaction is a quantity that measures the extent to which the reaction has proceeded. Often, it refers specifically to the value of the extent of reaction when equilibrium has been reached. It is usually denoted by the Greek letter ξ. The extent of reaction is usually defined so that it has units of amount (moles). It was introduced by the Belgian scientist Théophile de Donder.
Definition
Consider the reaction
A ⇌ 2 B + 3 C
Suppose an infinitesimal amount of the reactant A changes into B and C. This requires that all three mole numbers change according to the stoichiometry of the reaction, but they will not change by the same amounts. However, the extent of reaction can be used to describe the changes on a common footing as needed. The change of the number of moles of A can be represented by the equation dnA = −dξ, the change of B is dnB = 2 dξ, and the change of C is dnC = 3 dξ.
The change in the extent of reaction is then defined as
where ni denotes the number of moles of the i-th reactant or product and νi is its stoichiometric number. Since the stoichiometric number can be considered either to be dimensionless or to have units of moles, the extent of reaction can conversely be considered either to have units of moles (the more common convention) or to be dimensionless.
The extent of reaction represents the amount of progress made towards equilibrium in a chemical reaction. Considering finite changes instead of infinitesimal changes, one can write the equation for the extent of a reaction as
The extent of a reaction is generally defined as zero at the beginning of the reaction. Thus the change of ξ is the extent itself. Assuming that the system has come to equilibrium,
Although in the example above the extent of reaction was positive since the system shifted in the forward direction, this usage implies that in general the extent of reaction can be positive or negative, depending on the direction that the system shifts from its initial composition.
Relations
The relation between the change in Gibbs reaction energy and Gibbs energy can be defined as the slope of the Gibbs energy plotted against the extent of reaction at constant pressure and temperature.
This formula leads to the Nernst equation when applied to the oxidation-reduction reaction which generates the voltage of a voltaic cell. Analogously, the relation between the change in reaction enthalpy and enthalpy can be defined. For example,
Example
The extent of reaction is a useful quantity in computations with equilibrium reactions. Consider the reaction
2 A ⇌ B + 3 C
where the initial amounts are specified, and the equilibrium amount of A is 0.5 mol. We can calculate the extent of reaction at equilibrium from its definition
In the above, we note that the stoichiometric number of a reactant is negative. Once the extent is known, we can rearrange the equation and calculate the equilibrium amounts of B and C.
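A small sketch of this bookkeeping follows. The initial amounts used are assumed placeholder values (the text does not specify them); only the stoichiometric numbers (−2, +1, +3) and the equilibrium amount of A = 0.5 mol are taken from the example above.

```python
def equilibrium_amounts(n0: dict, n_A_eq: float) -> dict:
    """Extent of reaction and equilibrium amounts for 2 A <=> B + 3 C."""
    nu = {"A": -2, "B": 1, "C": 3}                 # stoichiometric numbers
    xi = (n_A_eq - n0["A"]) / nu["A"]              # extent of reaction, in moles
    return {"xi": xi, **{s: n0[s] + nu[s] * xi for s in nu}}

n0 = {"A": 2.0, "B": 1.0, "C": 0.0}                # assumed initial amounts, mol
print(equilibrium_amounts(n0, n_A_eq=0.5))
# xi = 0.75 mol, giving B = 1.75 mol and C = 2.25 mol at equilibrium
```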
References
Physical chemistry
Analytical chemistry | 0.788451 | 0.980265 | 0.772891 |
Structural bioinformatics | Structural bioinformatics is the branch of bioinformatics that is related to the analysis and prediction of the three-dimensional structure of biological macromolecules such as proteins, RNA, and DNA. It deals with generalizations about macromolecular 3D structures such as comparisons of overall folds and local motifs, principles of molecular folding, evolution, binding interactions, and structure/function relationships, working both from experimentally solved structures and from computational models. The term structural has the same meaning as in structural biology, and structural bioinformatics can be seen as a part of computational structural biology. The main objective of structural bioinformatics is the creation of new methods of analysing and manipulating biological macromolecular data in order to solve problems in biology and generate new knowledge.
Introduction
Protein structure
The structure of a protein is directly related to its function. The presence of certain chemical groups in specific locations allows proteins to act as enzymes, catalyzing several chemical reactions. In general, protein structures are classified into four levels: primary (sequences), secondary (local conformation of the polypeptide chain), tertiary (three-dimensional structure of the protein fold), and quaternary (association of multiple polypeptide structures). Structural bioinformatics mainly addresses interactions among structures taking into consideration their space coordinates. Thus, the primary structure is better analyzed in traditional branches of bioinformatics. However, the sequence implies restrictions that allow the formation of conserved local conformations of the polypeptide chain, such as alpha-helix, beta-sheets, and loops (secondary structure). Also, weak interactions (such as hydrogen bonds) stabilize the protein fold. Interactions could be intrachain, i.e., when occurring between parts of the same protein monomer (tertiary structure), or interchain, i.e., when occurring between different structures (quaternary structure). Finally, the topological arrangement of interactions, whether strong or weak, and entanglements is being studied in the field of structural bioinformatics, utilizing frameworks such as circuit topology.
Structure visualization
Protein structure visualization is an important issue for structural bioinformatics. It allows users to observe static or dynamic representations of the molecules, also allowing the detection of interactions that may be used to make inferences about molecular mechanisms. The most common types of visualization are:
Cartoon: this type of protein visualization highlights the secondary structure differences. In general, α-helix is represented as a type of screw, β-strands as arrows, and loops as lines.
Lines: each amino acid residue is represented by thin lines, which allows a low cost for graphic rendering.
Surface: in this visualization, the external shape of the molecule is shown.
Sticks: each covalent bond between amino acid atoms is represented as a stick. This type of visualization is most commonly used to visualize interactions between amino acids.
DNA structure
The classic DNA duplex structure was initially described by Watson and Crick (with contributions from Rosalind Franklin). Each DNA nucleotide is composed of three components: a phosphate group, a pentose sugar, and a nitrogenous base (adenine, thymine, cytosine, or guanine). The DNA double helix structure is stabilized by hydrogen bonds formed between base pairs: adenine with thymine (A-T) and cytosine with guanine (C-G). Many structural bioinformatics studies have focused on understanding interactions between DNA and small molecules, which has been the target of several drug design studies.
Interactions
Interactions are contacts established between parts of molecules at different levels. They are responsible for stabilizing protein structures and perform a varied range of activities. In biochemistry, interactions are characterized by the proximity of atom groups or molecules regions that present an effect upon one another, such as electrostatic forces, hydrogen bonding, and hydrophobic effect. Proteins can perform several types of interactions, such as protein-protein interactions (PPI), protein-peptide interactions, protein-ligand interactions (PLI), and protein-DNA interaction.
Calculating contacts
Calculating contacts is an important task in structural bioinformatics, being important for the correct prediction of protein structure and folding, thermodynamic stability, protein-protein and protein-ligand interactions, docking and molecular dynamics analyses, and so on.
Traditionally, computational methods have used a threshold distance between atoms (also called a cutoff) to detect possible interactions. This detection is performed based on the Euclidean distance and angles between atoms of determined types. However, most of the methods based on simple Euclidean distance cannot detect occluded contacts. Hence, cutoff-free methods, such as Delaunay triangulation, have gained prominence in recent years. In addition, the combination of a set of criteria, for example, physicochemical properties, distance, geometry, and angles, has been used to improve contact determination.
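A minimal sketch of cutoff-based contact detection between two coordinate sets is shown below; the coordinates and the 8 Å cutoff are illustrative assumptions, and real pipelines typically add the chemical-type and angle criteria discussed above.

```python
import numpy as np

def contacts(coords_a: np.ndarray, coords_b: np.ndarray, cutoff: float = 8.0):
    """Return (i, j) index pairs whose Euclidean distance is below the cutoff (angstroms)."""
    diff = coords_a[:, None, :] - coords_b[None, :, :]    # pairwise difference vectors
    dist = np.linalg.norm(diff, axis=-1)                  # pairwise distance matrix
    return [(int(i), int(j)) for i, j in zip(*np.where(dist < cutoff))]

a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])         # e.g. C-alpha atoms of chain A
b = np.array([[3.0, 4.0, 0.0], [6.0, 0.0, 0.0]])          # e.g. C-alpha atoms of chain B
print(contacts(a, b))   # [(0, 0), (0, 1), (1, 1)] at distances 5.0, 6.0 and 4.0 angstroms
```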
Protein Data Bank (PDB)
The Protein Data Bank (PDB) is a database of 3D structure data for large biological molecules, such as proteins, DNA, and RNA. PDB is managed by an international organization called the Worldwide Protein Data Bank (wwPDB), which is composed of several local organizations, such as PDBe, PDBj, RCSB, and BMRB. They are responsible for keeping copies of PDB data available on the internet at no charge. The number of structure data available at PDB has increased each year, with structures typically obtained by X-ray crystallography, NMR spectroscopy, or cryo-electron microscopy.
Data format
The PDB format (.pdb) is the legacy textual file format used to store information of three-dimensional structures of macromolecules used by the Protein Data Bank. Due to restrictions in the format structure conception, the PDB format does not allow large structures containing more than 62 chains or 99999 atom records.
The PDBx/mmCIF (macromolecular Crystallographic Information File) is a standard text file format for representing crystallographic information. Since 2014, the PDB format was substituted as the standard PDB archive distribution by the PDBx/mmCIF file format (.cif). While PDB format contains a set of records identified by a keyword of up to six characters, the PDBx/mmCIF format uses a structure based on key and value, where the key is a name that identifies some feature and the value is the variable information.
Other structural databases
In addition to the Protein Data Bank (PDB), there are several databases of protein structures and other macromolecules. Examples include:
MMDB: Experimentally determined three-dimensional structures of biomolecules derived from Protein Data Bank (PDB).
Nucleic acid Data Base (NDB): Experimentally determined information about nucleic acids (DNA, RNA).
Structural Classification of Proteins (SCOP): Comprehensive description of the structural and evolutionary relationships between structurally known proteins.
TOPOFIT-DB: Protein structural alignments based on the TOPOFIT method.
Electron Density Server (EDS): Electron-density maps and statistics about the fit of crystal structures and their maps.
CASP: Prediction Center Community-wide, worldwide experiment for protein structure prediction CASP.
PISCES server for creating non-redundant lists of proteins: Generates PDB list by sequence identity and structural quality criteria.
The Structural Biology Knowledgebase: Tools to aid in protein research design.
ProtCID: The Protein Common Interface Database Database of similar protein-protein interfaces in crystal structures of homologous proteins.
AlphaFold: the AlphaFold Protein Structure Database.
Structure comparison
Structural alignment
Structural alignment is a method for comparison between 3D structures based on their shape and conformation. It could be used to infer the evolutionary relationship among a set of proteins even with low sequence similarity. Structural alignment implies superimposing a 3D structure over a second one, rotating and translating atoms in corresponding positions (in general, using the Cα atoms or even the backbone heavy atoms C, N, O, and Cα). Usually, the alignment quality is evaluated based on the root-mean-square deviation (RMSD) of atomic positions, i.e., the average distance between atoms after superimposition:
where δi is the distance between atom i and either the corresponding reference atom in the other structure or the mean coordinate of the N equivalent atoms. In general, the RMSD outcome is measured in the Ångström (Å) unit, which is equivalent to 10−10 m. The nearer to zero the RMSD value, the more similar are the structures.
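A short sketch of the RMSD formula above for two already-superimposed coordinate sets; the optimal superposition itself (e.g., by the Kabsch algorithm) is assumed to have been done beforehand, and the coordinates are illustrative.

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """RMSD (in the units of the coordinates) between two N x 3 sets of matched atoms."""
    delta = coords_a - coords_b                       # per-atom displacement vectors
    return float(np.sqrt((delta ** 2).sum(axis=1).mean()))

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
b = a + np.array([0.1, -0.1, 0.2])                    # a small uniform shift
print(rmsd(a, b))                                     # ~0.245
```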
Graph-based structural signatures
Structural signatures, also called fingerprints, are macromolecule pattern representations that can be used to infer similarities and differences. Comparisons among a large set of proteins using RMSD still is a challenge due to the high computational cost of structural alignments. Structural signatures based on graph distance patterns among atom pairs have been used to determine protein identifying vectors and to detect non-trivial information. Furthermore, linear algebra and machine learning can be used for clustering protein signatures, detecting protein-ligand interactions, predicting ΔΔG, and proposing mutations based on Euclidean distance.
Structure prediction
The atomic structures of molecules can be obtained by several methods, such as X-ray crystallography (XRC), NMR spectroscopy, and 3D electron microscopy; however, these processes can present high costs and sometimes some structures can be hardly established, such as membrane proteins. Hence, it is necessary to use computational approaches for determining 3D structures of macromolecules. The structure prediction methods are classified into comparative modeling and de novo modeling.
Comparative modeling
Comparative modeling, also known as homology modeling, corresponds to the methodology to construct three-dimensional structures from an amino acid sequence of a target protein and a template with known structure. The literature has described that evolutionarily related proteins tend to present a conserved three-dimensional structure. In addition, sequences of distantly related proteins with identity lower than 20% can present different folds.
De novo modeling
In structural bioinformatics, de novo modeling, also known as ab initio modeling, refers to approaches for obtaining three-dimensional structures from sequences without the necessity of a homologous known 3D structure. Despite the new algorithms and methods proposed in recent years, de novo protein structure prediction is still considered one of the remaining outstanding issues in modern science.
Structure validation
After structure modeling, an additional step of structure validation is necessary, since many comparative and de novo modeling algorithms and tools use heuristics to try to assemble the 3D structure, which can generate many errors. Some validation strategies consist of calculating energy scores and comparing them with experimentally determined structures. For example, the DOPE score is an energy score used by the MODELLER tool for determining the best model.
Another validation strategy is to calculate the φ and ψ backbone dihedral angles of all residues and construct a Ramachandran plot. The side-chains of amino acids and the nature of interactions in the backbone restrict these two angles, and thus the visualization of allowed conformations can be performed based on the Ramachandran plot. A high proportion of amino acids in disallowed positions of the plot is an indication of low-quality modeling.
Prediction tools
A list with commonly used software tools for protein structure prediction, including comparative modeling, protein threading, de novo protein structure prediction, and secondary structure prediction is available in the list of protein structure prediction software.
Molecular docking
Molecular docking (also referred to only as docking) is a method used to predict the orientation coordinates of a molecule (ligand) when bound to another one (receptor or target). The binding may be mostly through non-covalent interactions while covalently linked binding can also be studied. Molecular docking aims to predict possible poses (binding modes) of the ligand when it interacts with specific regions on the receptor. Docking tools use force fields to estimate a score for ranking best poses that favored better interactions between the two molecules.
In general, docking protocols are used to predict the interactions between small molecules and proteins. However, docking also can be used to detect associations and binding modes among proteins, peptides, DNA or RNA molecules, carbohydrates, and other macromolecules.
Virtual screening
Virtual screening (VS) is a computational approach used for fast screening of large compound libraries for drug discovery. Usually, virtual screening uses docking algorithms to rank small molecules with the highest affinity to a target receptor.
In recent times, several tools have been used to evaluate the use of virtual screening in the process of discovering new drugs. However, problems such as missing information, inaccurate understanding of drug-like molecular properties, weak scoring functions, or insufficient docking strategies hinder the docking process. Hence, the literature has described that it is still not considered a mature technology.
Molecular dynamics
Molecular dynamics (MD) is a computational method for simulating interactions between molecules and their atoms during a given period of time. This method allows the observation of the behavior of molecules and their interactions, considering the system as a whole. To calculate the behavior of the systems and, thus, determine the trajectories, an MD can use Newton's equation of motion, in addition to using molecular mechanics methods to estimate the forces that occur between particles (force fields).
Applications
Informatics approaches used in structural bioinformatics are:
Selection of Target - Potential targets are identified by comparing them with databases of known structures and sequence. The importance of a target can be decided on the basis of published literature. Target can also be selected on the basis of its protein domain. Protein domains are building blocks that can be rearranged to form new proteins. They can be studied in isolation initially.
Tracking X-ray crystallography trials - X-ray crystallography can be used to reveal the three-dimensional structure of a protein. But, in order to use X-rays for studying protein crystals, pure protein crystals must be formed, which can take a lot of trials. This leads to a need for tracking the conditions and results of trials. Furthermore, supervised machine learning algorithms can be used on the stored data to identify conditions that might increase the yield of pure crystals.
Analysis of X-ray crystallographic data - The diffraction pattern obtained as a result of bombarding X-rays on electrons is the Fourier transform of the electron density distribution. There is a need for algorithms that can deconvolve the Fourier transform with partial information (due to missing phase information, as the detectors can only measure the amplitude of diffracted X-rays, and not the phase shifts). Extrapolation techniques such as multiwavelength anomalous dispersion can be used to generate an electron density map, which uses the location of selenium atoms as a reference to determine the rest of the structure. A standard ball-and-stick model is generated from the electron density map.
Analysis of NMR spectroscopy data - Nuclear magnetic resonance spectroscopy experiments produce two (or higher) dimensional data, with each peak corresponding to a chemical group within the sample. Optimization methods are used to convert spectra into three dimensional structures.
Correlating Structural information with functional information - Structural studies can be used as probe for structural-functional relationship.
Tools
See also
References
Further reading | 0.804788 | 0.960363 | 0.772888 |
Calorimetry | In chemistry and thermodynamics, calorimetry is the science or act of measuring changes in state variables of a body for the purpose of deriving the heat transfer associated with changes of its state due, for example, to chemical reactions, physical changes, or phase transitions under specified constraints. Calorimetry is performed with a calorimeter. Scottish physician and scientist Joseph Black, who was the first to recognize the distinction between heat and temperature, is said to be the founder of the science of calorimetry.
Indirect calorimetry calculates heat that living organisms produce by measuring either their production of carbon dioxide and nitrogen waste (frequently ammonia in aquatic organisms, or urea in terrestrial ones), or from their consumption of oxygen. Lavoisier noted in 1780 that heat production can be predicted from oxygen consumption this way, using multiple regression. The dynamic energy budget theory explains why this procedure is correct. Heat generated by living organisms may also be measured by direct calorimetry, in which the entire organism is placed inside the calorimeter for the measurement.
A widely used modern instrument is the differential scanning calorimeter, a device which allows thermal data to be obtained on small amounts of material. It involves heating the sample at a controlled rate and recording the heat flow either into or from the specimen.
Classical calorimetric calculation of heat
Cases with differentiable equation of state for a one-component body
Basic classical calculation with respect to volume
Calorimetry requires that a reference material that changes temperature have known definite thermal constitutive properties. The classical rule, recognized by Clausius and Kelvin, is that the pressure exerted by the calorimetric material is fully and rapidly determined solely by its temperature and volume; this rule is for changes that do not involve phase change, such as melting of ice. There are many materials that do not comply with this rule, and for them, the present formula of classical calorimetry does not provide an adequate account. Here the classical rule is assumed to hold for the calorimetric material being used, and the propositions are mathematically written:
The thermal response of the calorimetric material is fully described by its pressure p(V, T), as the value of its constitutive function of just the volume V and the temperature T. All increments are here required to be very small. This calculation refers to a domain of volume and temperature of the body in which no phase change occurs, and there is only one phase present. An important assumption here is continuity of property relations. A different analysis is needed for phase change
When a small increment of heat δQ is gained by a calorimetric body, with small increments δV of its volume and δT of its temperature, the increment of heat gained by the body of calorimetric material is given by
δQ = LV(V, T) δV + CV(V, T) δT
where
LV(V, T) denotes the latent heat with respect to volume, of the calorimetric material at constant controlled temperature T. The surroundings' pressure on the material is instrumentally adjusted to impose a chosen volume change, with initial volume V. To determine this latent heat, the volume change is effectively the independently instrumentally varied quantity. This latent heat is not one of the widely used ones, but is of theoretical or conceptual interest.
CV(V, T) denotes the heat capacity of the calorimetric material at fixed constant volume V, while the pressure of the material is allowed to vary freely, with initial temperature T. The temperature is forced to change by exposure to a suitable heat bath. It is customary to write CV(V, T) simply as CV, or even more briefly as C. This heat capacity is one of the two widely used ones.
The latent heat with respect to volume is the heat required for unit increment in volume at constant temperature. It can be said to be 'measured along an isotherm', and the pressure the material exerts is allowed to vary freely, according to its constitutive law p(V, T). For a given material, it can have a positive or negative sign, or exceptionally it can be zero, and this can depend on the temperature, as it does for water at about 4 °C. The concept of latent heat with respect to volume was perhaps first recognized by Joseph Black in 1762. The term 'latent heat of expansion' is also used. The latent heat with respect to volume can also be called the 'latent energy with respect to volume'. For all of these usages of 'latent heat', a more systematic terminology uses 'latent heat capacity'.
The heat capacity at constant volume is the heat required for unit increment in temperature at constant volume. It can be said to be 'measured along an isochor', and again, the pressure the material exerts is allowed to vary freely. It always has a positive sign. This means that for an increase in the temperature of a body without change of its volume, heat must be supplied to it. This is consistent with common experience.
Quantities like δQ are sometimes called 'curve differentials', because they are measured along curves in the (V, T) surface.
Classical theory for constant-volume (isochoric) calorimetry
Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter. Heat is still measured by the above-stated principle of calorimetry.
This means that in a suitably constructed calorimeter, called a bomb calorimeter, the increment of volume can be made to vanish, δV = 0. For constant-volume calorimetry:
δQ = CV δT
where
δT denotes the increment in temperature and
CV denotes the heat capacity at constant volume.
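A minimal sketch of this constant-volume rule follows, additionally assuming a heat capacity that stays constant over the finite temperature interval; the numbers are illustrative, not data for a real calorimeter.

```python
def heat_constant_volume(C_V: float, T1: float, T2: float) -> float:
    """Heat (J) gained by the calorimetric material between temperatures T1 and T2 (K)."""
    return C_V * (T2 - T1)            # Q = C_V * delta_T, with C_V taken as constant

print(heat_constant_volume(C_V=10_000.0, T1=298.0, T2=301.25))   # 32500.0 J
```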
Classical heat calculation with respect to pressure
From the above rule of calculation of heat with respect to volume, there follows one with respect to pressure.
In a process of small increments δp of its pressure and δT of its temperature, the increment of heat δQ gained by the body of calorimetric material is given by
δQ = Lp(p, T) δp + Cp(p, T) δT
where
Lp(p, T) denotes the latent heat with respect to pressure, of the calorimetric material at constant temperature, while the volume and pressure of the body are allowed to vary freely, at pressure p and temperature T;
Cp(p, T) denotes the heat capacity of the calorimetric material at constant pressure, while the temperature and volume of the body are allowed to vary freely, at pressure p and temperature T. It is customary to write Cp(p, T) simply as Cp, or even more briefly as C.
The new quantities here are related to the previous ones:
Lp(p, T) = LV(V, T) (∂V/∂p)
Cp(p, T) = CV(V, T) + LV(V, T) (∂V/∂T)
where
(∂V/∂p) denotes the partial derivative of V(p, T) with respect to p evaluated for (p, T)
and
(∂V/∂T) denotes the partial derivative of V(p, T) with respect to T evaluated for (p, T).
The latent heats and are always of opposite sign.
It is common to refer to the ratio of specific heats as
$\gamma = \frac{C_p}{C_V}$,
often just written as $\gamma$.
Calorimetry through phase change, equation of state shows one jump discontinuity
An early calorimeter was that used by Laplace and Lavoisier, as shown in the figure above. It worked at constant temperature, and at atmospheric pressure. The latent heat involved was then not a latent heat with respect to volume or with respect to pressure, as in the above account for calorimetry without phase change. The latent heat involved in this calorimeter was with respect to phase change, naturally occurring at constant temperature. This kind of calorimeter worked by measurement of mass of water produced by the melting of ice, which is a phase change.
Cumulation of heating
For a time-dependent process of heating of the calorimetric material, defined by a continuous joint progression of and , starting at time and ending at time , there can be calculated an accumulated quantity of heat delivered, . This calculation is done by mathematical integration along the progression with respect to time. This is because increments of heat are 'additive'; but this does not mean that heat is a conservative quantity. The idea that heat was a conservative quantity was invented by Lavoisier, and is called the 'caloric theory'; by the middle of the nineteenth century it was recognized as mistaken. Written with the symbol , the quantity is not at all restricted to be an increment with very small values; this is in contrast with .
One can write
.
This expression uses quantities such as which are defined in the section below headed 'Mathematical aspects of the above rules'.
Mathematical aspects of the above rules
The use of 'very small' quantities such as is related to the physical requirement for the quantity to be 'rapidly determined' by and ; such 'rapid determination' refers to a physical process. These 'very small' quantities are used in the Leibniz approach to the infinitesimal calculus. The Newton approach uses instead 'fluxions' such as , which makes it more obvious that must be 'rapidly determined'.
In terms of fluxions, the above first rule of calculation can be written
where
denotes the time
denotes the time rate of heating of the calorimetric material at time
denotes the time rate of change of volume of the calorimetric material at time
denotes the time rate of change of temperature of the calorimetric material.
The increment and the fluxion are obtained for a particular time that determines the values of the quantities on the righthand sides of the above rules. But this is not a reason to expect that there should exist a mathematical function . For this reason, the increment is said to be an 'imperfect differential' or an 'inexact differential'. Some books indicate this by writing instead of . Also, the notation đQ is used in some books. Carelessness about this can lead to error.
The quantity is properly said to be a functional of the continuous joint progression of and , but, in the mathematical definition of a function, is not a function of . Although the fluxion is defined here as a function of time , the symbols and respectively standing alone are not defined here.
Physical scope of the above rules of calorimetry
The above rules refer only to suitable calorimetric materials. The terms 'rapidly' and 'very small' call for empirical physical checking of the domain of validity of the above rules.
The above rules for the calculation of heat belong to pure calorimetry. They make no reference to thermodynamics, and were mostly understood before the advent of thermodynamics. They are the basis of the 'thermo' contribution to thermodynamics. The 'dynamics' contribution is based on the idea of work, which is not used in the above rules of calculation.
Experimentally conveniently measured coefficients
Empirically, it is convenient to measure properties of calorimetric materials under experimentally controlled conditions.
Pressure increase at constant volume
For measurements at experimentally controlled volume, one can use the assumption, stated above, that the pressure of the body of calorimetric material can be expressed as a function of its volume and temperature.
For measurement at constant experimentally controlled volume, the isochoric coefficient of pressure rise with temperature, is defined by
Expansion at constant pressure
For measurements at experimentally controlled pressure, it is assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure . This assumption is related to, but is not the same as, the above used assumption that the pressure of the body of calorimetric material is known as a function of its volume and temperature; anomalous behaviour of materials can affect this relation.
The quantity that is conveniently measured at constant experimentally controlled pressure, the isobaric volume expansion coefficient, is defined by
$\alpha_p = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p$
Compressibility at constant temperature
For measurements at experimentally controlled temperature, it is again assumed that the volume of the body of calorimetric material can be expressed as a function of its temperature and pressure , with the same provisos as mentioned just above.
The quantity that is conveniently measured at constant experimentally controlled temperature, the isothermal compressibility, is defined by
$\kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T$
Relation between classical calorimetric quantities
Assuming that the rule is known, one can derive the function of that is used above in the classical heat calculation with respect to pressure. This function can be found experimentally from the coefficients and through the mathematically deducible relation
.
Connection between calorimetry and thermodynamics
Thermodynamics developed gradually over the first half of the nineteenth century, building on the above theory of calorimetry which had been worked out before it, and on other discoveries. According to Gislason and Craig (2005): "Most thermodynamic data come from calorimetry..." According to Kondepudi (2008): "Calorimetry is widely used in present day laboratories."
In terms of thermodynamics, the internal energy of the calorimetric material can be considered as the value of a function of , with partial derivatives and .
Then it can be shown that one can write a thermodynamic version of the above calorimetric rules:
with
and
.
Again, further in terms of thermodynamics, the internal energy of the calorimetric material can sometimes, depending on the calorimetric material, be considered as the value of a function of , with partial derivatives and , and with being expressible as the value of a function of , with partial derivatives and .
Then, according to Adkins (1975), it can be shown that one can write a further thermodynamic version of the above calorimetric rules:
with
and
.
Beyond the calorimetric fact noted above that the latent heats and are always of opposite sign, it may be shown, using the thermodynamic concept of work, that also
Special interest of thermodynamics in calorimetry: the isothermal segments of a Carnot cycle
Calorimetry has a special benefit for thermodynamics: it provides information about the heat absorbed or emitted in the isothermal segments of a Carnot cycle.
A Carnot cycle is a special kind of cyclic process affecting a body composed of material suitable for use in a heat engine. Such a material is of the kind considered in calorimetry, as noted above, that exerts a pressure that is very rapidly determined just by temperature and volume. Such a body is said to change reversibly. A Carnot cycle consists of four successive stages or segments:
(1) a change in volume from a volume to a volume at constant temperature so as to incur a flow of heat into the body (known as an isothermal change)
(2) a change in volume from to a volume at a variable temperature just such as to incur no flow of heat (known as an adiabatic change)
(3) another isothermal change in volume from to a volume at constant temperature such as to incur a flow of heat out of the body and just such as to precisely prepare for the following change
(4) another adiabatic change of volume from back to just such as to return the body to its starting temperature .
In isothermal segment (1), the heat that flows into the body is given by
and in isothermal segment (3) the heat that flows out of the body is given by
.
Because the segments (2) and (4) are adiabats, no heat flows into or out of the body during them, and consequently the net heat supplied to the body during the cycle is given by
.
This quantity is used by thermodynamics and is related in a special way to the net work done by the body during the Carnot cycle. The net change of the body's internal energy during the Carnot cycle, , is equal to zero, because the material of the working body has the special properties noted above.
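As a purely illustrative sketch (not taken from the text above), the following program evaluates the two isothermal heats and the net heat for a Carnot cycle under the additional assumption that the working body is an ideal gas, so that each isothermal heat reduces to $nRT\ln(V_{\text{final}}/V_{\text{initial}})$; all numerical values are placeholders.
#include <cmath>
#include <iostream>

int main() {
    const double R = 8.314;        // gas constant, J/(mol K)
    const double n = 1.0;          // amount of working substance, mol (assumed)
    const double T1 = 500.0;       // temperature of isothermal segment (1), K (assumed)
    const double T2 = 300.0;       // temperature of isothermal segment (3), K (assumed)
    const double VA = 1.0e-3;      // volume at the start of segment (1), m^3 (assumed)
    const double VB = 2.0e-3;      // volume at the end of segment (1), m^3 (assumed)

    // Heat absorbed along the isotherm at T1 (ideal-gas assumption).
    double Q1 = n * R * T1 * std::log(VB / VA);
    // For an ideal gas the two adiabats force the compression ratio of
    // segment (3) to be the inverse of that of segment (1).
    double Q3 = n * R * T2 * std::log(VA / VB);   // negative: heat leaves the body

    double Qnet = Q1 + Q3;   // equals the net work done by the body over the cycle
    std::cout << "Q1 = " << Q1 << " J, Q3 = " << Q3 << " J, net heat = " << Qnet << " J\n";
    std::cout << "efficiency = " << Qnet / Q1 << " (= 1 - T2/T1 = " << 1.0 - T2 / T1 << ")\n";
    return 0;
}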
Special interest of calorimetry in thermodynamics: relations between classical calorimetric quantities
Relation of latent heat with respect to volume, and the equation of state
The quantity , the latent heat with respect to volume, belongs to classical calorimetry. It accounts for the occurrence of energy transfer by work in a process in which heat is also transferred; the quantity, however, was considered before the relation between heat and work transfers was clarified by the invention of thermodynamics. In the light of thermodynamics, the classical calorimetric quantity is revealed as being tightly linked to the calorimetric material's equation of state . Provided that the temperature is measured in the thermodynamic absolute scale, the relation is expressed in the formula
.
Difference of specific heats
Advanced thermodynamics provides the relation
.
From this, further mathematical and thermodynamic reasoning leads to another relation between classical calorimetric quantities. The difference of specific heats is given by
.
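A commonly quoted form of this difference, written here for illustration in terms of the experimentally convenient coefficients defined earlier (the isobaric volume expansion coefficient $\alpha_p$ and the isothermal compressibility $\kappa_T$), is
$C_p - C_V = \frac{T V \alpha_p^2}{\kappa_T}$,
which for an ideal gas reduces to $C_p - C_V = nR$.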
Practical constant-volume calorimetry (bomb calorimetry) for thermodynamic studies
Constant-volume calorimetry is calorimetry performed at a constant volume. This involves the use of a constant-volume calorimeter.
No work is performed in constant-volume calorimetry, so the heat measured equals the change in internal energy of the system. The heat capacity at constant volume is assumed to be independent of temperature.
Heat is measured by the principle of calorimetry: $\Delta U = C_V\,\Delta T$,
where
ΔU is change in internal energy,
ΔT is change in temperature and
CV is the heat capacity at constant volume.
In constant-volume calorimetry the pressure is not held constant. If there is a pressure difference between initial and final states, the heat measured needs adjustment to provide the enthalpy change. One then has
$\Delta H = \Delta U + V\,\Delta P$
where
ΔH is change in enthalpy and
V is the unchanging volume of the sample chamber.
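A minimal numerical sketch of this bookkeeping, with all values purely illustrative and a temperature-independent heat capacity as assumed above:
#include <iostream>

int main() {
    const double CV = 10.0e3;   // heat capacity of calorimeter plus contents, J/K (assumed)
    const double dT = 2.5;      // measured temperature rise, K (assumed)
    const double V  = 3.0e-4;   // fixed volume of the sample chamber, m^3 (assumed)
    const double dP = 5.0e4;    // pressure difference between final and initial states, Pa (assumed)

    double dU = CV * dT;        // heat measured at constant volume = change in internal energy
    double dH = dU + V * dP;    // enthalpy change after the constant-volume correction

    std::cout << "dU = " << dU << " J, dH = " << dH << " J" << std::endl;
    return 0;
}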
See also
Isothermal microcalorimetry (IMC)
Isothermal titration calorimetry
Sorption calorimetry
Reaction calorimeter
References
Books
External links
Heat transfer
Automatic differentiation
In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation or computational differentiation, is a set of techniques to evaluate the partial derivative of a function specified by a computer program.
Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor of more arithmetic operations than the original program.
Difference from other differentiation methods
Automatic differentiation is distinct from symbolic differentiation and numerical differentiation.
Symbolic differentiation faces the difficulty of converting a computer program into a single mathematical expression and can lead to inefficient code. Numerical differentiation (the method of finite differences) can introduce round-off errors in the discretization process and cancellation. Both of these classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both of these classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.
Applications
Automatic differentiation is particularly important in the field of machine learning. For example, it allows one to implement backpropagation in a neural network without a manually-computed derivative.
Forward and reverse accumulation
Chain rule of partial derivatives of composite functions
Fundamental to automatic differentiation is the decomposition of differentials provided by the chain rule of partial derivatives of composite functions. For the simple composition $y = f(g(h(x)))$, writing $w_0 = x$, $w_1 = h(w_0)$, $w_2 = g(w_1)$ and $y = w_3 = f(w_2)$,
the chain rule gives
$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w_2}\,\frac{\partial w_2}{\partial w_1}\,\frac{\partial w_1}{\partial x}$
Two types of automatic differentiation
Usually, two distinct modes of automatic differentiation are presented.
forward accumulation (also called bottom-up, forward mode, or tangent mode)
reverse accumulation (also called top-down, reverse mode, or adjoint mode)
Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first compute $\frac{\partial w_1}{\partial x}$, then $\frac{\partial w_2}{\partial w_1}$, and at last $\frac{\partial y}{\partial w_2}$), while reverse accumulation has the traversal from outside to inside (first compute $\frac{\partial y}{\partial w_2}$, then $\frac{\partial w_2}{\partial w_1}$, and at last $\frac{\partial w_1}{\partial x}$). More succinctly,
Forward accumulation computes the recursive relation: with , and,
Reverse accumulation computes the recursive relation: with .
The value of the partial derivative, called the seed, is propagated forward or backward and is initially $\frac{\partial x}{\partial x} = 1$ or $\frac{\partial y}{\partial y} = 1$. Forward accumulation evaluates the function and calculates the derivative with respect to one independent variable in one pass. For each independent variable a separate pass is therefore necessary, in which the derivative with respect to that independent variable is set to one and that of all others to zero. In contrast, reverse accumulation requires the evaluated partial functions for the partial derivatives. Reverse accumulation therefore evaluates the function first and calculates the derivatives with respect to all independent variables in an additional pass.
Which of these two types should be used depends on the sweep count. The computational complexity of one sweep is proportional to the complexity of the original code.
Forward accumulation is more efficient than reverse accumulation for functions $f : \mathbb{R}^n \to \mathbb{R}^m$ with $n \ll m$, as only $n$ sweeps are necessary, compared to $m$ sweeps for reverse accumulation.
Reverse accumulation is more efficient than forward accumulation for functions $f : \mathbb{R}^n \to \mathbb{R}^m$ with $n \gg m$, as only $m$ sweeps are necessary, compared to $n$ sweeps for forward accumulation.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse accumulation.
Forward accumulation was introduced by R.E. Wengert in 1964. According to Andreas Griewank, reverse accumulation has been suggested since the late 1960s, but the inventor is unknown. Seppo Linnainmaa published reverse accumulation in 1976.
Forward accumulation
In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, this involves repeatedly substituting the derivative of the inner functions in the chain rule:
This can be generalized to multiple variables as a matrix product of Jacobians.
Compared to reverse accumulation, forward accumulation is natural and easy to implement as the flow of derivative information coincides with the order of evaluation. Each variable is augmented with its derivative (stored as a numerical value, not a symbolic expression),
as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.
Using the chain rule, if the variable $w_i$ has predecessors $w_j$ in the computational graph:
$\dot w_i = \sum_{j \in \mathrm{pred}(i)} \frac{\partial w_i}{\partial w_j}\,\dot w_j$
As an example, consider the function:
For clarity, the individual sub-expressions have been labeled with the variables .
The choice of the independent variable with respect to which differentiation is performed affects the seed values. Given interest in the derivative of this function with respect to one of the independent variables, the seed values should be set accordingly: one for that variable and zero for all others.
With the seed values set, the values propagate using the chain rule as shown. Figure 2 shows a pictorial depiction of this process as a computational graph.
(Table: for each sub-expression, the operation used to compute its value is listed alongside the corresponding operation used to compute its derivative; the first two rows are the seed values of the two independent variables.)
To compute the gradient of this example function, which requires not only but also , an additional sweep is performed over the computational graph using the seed values .
Implementation
Pseudocode
Forward accumulation calculates the function and the derivative (but only for one independent variable each) in one pass. The associated method call expects the expression Z to be derived with regard to a variable V. The method returns a pair of the evaluated function and its derivative. The method traverses the expression tree recursively until a variable is reached. If the derivative with respect to this variable is requested, its derivative is 1, 0 otherwise. Then the partial function as well as the partial derivative are evaluated.
tuple<float,float> evaluateAndDerive(Expression Z, Variable V) {
if isVariable(Z)
if (Z = V) return {valueOf(Z), 1};
else return {valueOf(Z), 0};
else if (Z = A + B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a + b, a' + b'};
else if (Z = A - B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a - b, a' - b'};
else if (Z = A * B)
{a, a'} = evaluateAndDerive(A, V);
{b, b'} = evaluateAndDerive(B, V);
return {a * b, b * a' + a * b'};
}
C++
#include <iostream>
struct ValueAndPartial { float value, partial; };
struct Variable;
struct Expression {
virtual ValueAndPartial evaluateAndDerive(Variable *variable) = 0;
};
struct Variable: public Expression {
float value;
Variable(float value): value(value) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
float partial = (this == variable) ? 1.0f : 0.0f;
return {value, partial};
}
};
struct Plus: public Expression {
Expression *a, *b;
Plus(Expression *a, Expression *b): a(a), b(b) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
auto [valueA, partialA] = a->evaluateAndDerive(variable);
auto [valueB, partialB] = b->evaluateAndDerive(variable);
return {valueA + valueB, partialA + partialB};
}
};
struct Multiply: public Expression {
Expression *a, *b;
Multiply(Expression *a, Expression *b): a(a), b(b) {}
ValueAndPartial evaluateAndDerive(Variable *variable) {
auto [valueA, partialA] = a->evaluateAndDerive(variable);
auto [valueB, partialB] = b->evaluateAndDerive(variable);
return {valueA * valueB, valueB * partialA + valueA * partialB};
}
};
int main() {
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Variable x(2), y(3);
Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
float xPartial = z.evaluateAndDerive(&x).partial;
float yPartial = z.evaluateAndDerive(&y).partial;
std::cout << "∂z/∂x = " << xPartial << ", "
<< "∂z/∂y = " << yPartial << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Reverse accumulation
In reverse accumulation AD, the dependent variable to be differentiated is fixed and the derivative is computed with respect to each sub-expression recursively. In a pen-and-paper calculation, the derivative of the outer functions is repeatedly substituted in the chain rule:
In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar, $\bar w$; it is a derivative of a chosen dependent variable $y$ with respect to a subexpression $w$:
$\bar w = \frac{\partial y}{\partial w}$
Using the chain rule, if the variable $w_i$ has successors $w_j$ in the computational graph:
$\bar w_i = \sum_{j \in \mathrm{succ}(i)} \bar w_j\,\frac{\partial w_j}{\partial w_i}$
Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is scalar-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables as well as the instructions that produced them in a data structure known as a "tape" or a Wengert list (however, Wengert published forward accumulation, not reverse accumulation), which may consume significant memory if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as rematerialization. Checkpointing is also used to save intermediary states.
The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order):
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function $y = f(x)$ in the primal causes $\bar{x} = \bar{y}\,f'(x)$ in the adjoint; etc.
Implementation
Pseudo code
Reverse accumulation requires two passes: In the forward pass, the function is evaluated first and the partial results are cached. In the reverse pass, the partial derivatives are calculated and the previously derived value is backpropagated. The corresponding method call expects the expression Z to be derived and a seed value holding the derivative of the parent expression. For the top expression, Z derived with regard to Z, this is 1. The method traverses the expression tree recursively until a variable is reached and adds the current seed value to the derivative expression.
void derive(Expression Z, float seed) {
if isVariable(Z)
partialDerivativeOf(Z) += seed;
else if (Z = A + B)
derive(A, seed);
derive(B, seed);
else if (Z = A - B)
derive(A, seed);
derive(B, -seed);
else if (Z = A * B)
derive(A, valueOf(B) * seed);
derive(B, valueOf(A) * seed);
}
C++
#include <iostream>
struct Expression {
float value;
virtual void evaluate() = 0;
virtual void derive(float seed) = 0;
};
struct Variable: public Expression {
float partial;
Variable(float value) {
this->value = value;
partial = 0.0f;
}
void evaluate() {}
void derive(float seed) {
partial += seed;
}
};
struct Plus: public Expression {
Expression *a, *b;
Plus(Expression *a, Expression *b): a(a), b(b) {}
void evaluate() {
a->evaluate();
b->evaluate();
value = a->value + b->value;
}
void derive(float seed) {
a->derive(seed);
b->derive(seed);
}
};
struct Multiply: public Expression {
Expression *a, *b;
Multiply(Expression *a, Expression *b): a(a), b(b) {}
void evaluate() {
a->evaluate();
b->evaluate();
value = a->value * b->value;
}
void derive(float seed) {
a->derive(b->value * seed);
b->derive(a->value * seed);
}
};
int main() {
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Variable x(2), y(3);
Plus p1(&x, &y); Multiply m1(&x, &p1); Multiply m2(&y, &y); Plus z(&m1, &m2);
z.evaluate();
std::cout << "z = " << z.value << std::endl;
// Output: z = 19
z.derive(1);
std::cout << "∂z/∂x = " << x.partial << ", "
<< "∂z/∂y = " << y.partial << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Beyond forward and reverse accumulation
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of $f : \mathbb{R}^n \to \mathbb{R}^m$ with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete. Central to this proof is the idea that algebraic dependencies may exist between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
Automatic differentiation using dual numbers
Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers.
Replace every number $x$ with the number $x + x'\varepsilon$, where $x'$ is a real number, but $\varepsilon$ is an abstract number with the property $\varepsilon^2 = 0$ (an infinitesimal; see Smooth infinitesimal analysis). Using only this, regular arithmetic gives
$(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon$
$(x + x'\varepsilon)\cdot(y + y'\varepsilon) = xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (xy' + yx')\varepsilon$
using $\varepsilon^2 = 0$.
Now, polynomials can be calculated in this augmented arithmetic. If $P(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n$, then
$P(x + x'\varepsilon) = P(x) + P'(x)\,x'\varepsilon$
where $P'$ denotes the derivative of $P$ with respect to its first argument, and $x'$, called a seed, can be chosen arbitrarily.
The new arithmetic consists of ordered pairs, elements written , with ordinary arithmetics on the first component, and first order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions gives a list of the basic arithmetic and some standard functions for the new arithmetic:
and in general for the primitive function ,
where and are the derivatives of with respect to its first and second arguments, respectively.
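As an illustration (standard forward-mode rules stated here for convenience, not quoted from a particular source table), representative entries of such a list read
$\langle u, u'\rangle + \langle v, v'\rangle = \langle u + v,\; u' + v'\rangle,$
$\langle u, u'\rangle \cdot \langle v, v'\rangle = \langle u\,v,\; u'v + u\,v'\rangle,$
$\sin\langle u, u'\rangle = \langle \sin u,\; u'\cos u\rangle,$
and, for a general differentiable primitive $g$,
$g(\langle u, u'\rangle, \langle v, v'\rangle) = \langle g(u,v),\; g_u(u,v)\,u' + g_v(u,v)\,v'\rangle.$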
When a binary basic arithmetic operation is applied to mixed arguments—the pair $\langle u, u'\rangle$ and the real number $c$—the real number is first lifted to $\langle c, 0\rangle$. The derivative of a function $f$ at the point $x_0$ is now found by calculating $f(\langle x_0, 1\rangle)$ using the above arithmetic, which gives $\langle f(x_0), f'(x_0)\rangle$ as the result.
Implementation
An example implementation based on the dual number approach follows.
C++
#include <iostream>
struct Dual {
float realPart, infinitesimalPart;
Dual(float realPart, float infinitesimalPart=0): realPart(realPart), infinitesimalPart(infinitesimalPart) {}
Dual operator+(Dual other) {
return Dual(
realPart + other.realPart,
infinitesimalPart + other.infinitesimalPart
);
}
Dual operator*(Dual other) {
return Dual(
realPart * other.realPart,
other.realPart * infinitesimalPart + realPart * other.infinitesimalPart
);
}
};
// Example: Finding the partials of z = x * (x + y) + y * y at (x, y) = (2, 3)
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }
int main() {
Dual x = Dual(2);
Dual y = Dual(3);
Dual epsilon = Dual(0, 1);
Dual a = f(x + epsilon, y);
Dual b = f(x, y + epsilon);
std::cout << "∂z/∂x = " << a.infinitesimalPart << ", "
<< "∂z/∂y = " << b.infinitesimalPart << std::endl;
// Output: ∂z/∂x = 7, ∂z/∂y = 8
return 0;
}
Vector arguments and functions
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute the directional derivative $\nabla f(x)\cdot r$ of $f$ at $x$ in the direction $r$, it may be calculated as $f(\langle x_1, r_1\rangle, \ldots, \langle x_n, r_n\rangle)$ using the same arithmetic as above. If all the elements of the gradient are desired, then $n$ function evaluations are required, one for each independent variable. Note that in many optimization applications, the directional derivative is indeed sufficient.
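The following self-contained sketch restates the Dual type from the example above and computes a directional derivative in a single forward pass by seeding the infinitesimal parts with the components of the direction; the chosen direction and its name r are illustrative.
#include <iostream>

struct Dual {
    float realPart, infinitesimalPart;
    Dual(float r, float i = 0): realPart(r), infinitesimalPart(i) {}
    Dual operator+(Dual o) { return Dual(realPart + o.realPart, infinitesimalPart + o.infinitesimalPart); }
    Dual operator*(Dual o) { return Dual(realPart * o.realPart,
                                         o.realPart * infinitesimalPart + realPart * o.infinitesimalPart); }
};

// Same example function as above: z = x * (x + y) + y * y
Dual f(Dual x, Dual y) { return x * (x + y) + y * y; }

int main() {
    // Point (x, y) = (2, 3); direction r = (1, -1).
    Dual x(2, 1.0f);    // seed the dual part of x with r1 = 1
    Dual y(3, -1.0f);   // seed the dual part of y with r2 = -1
    Dual z = f(x, y);
    std::cout << "grad f . r = " << z.infinitesimalPart << std::endl;
    // grad f = (7, 8), so grad f . (1, -1) = -1
    return 0;
}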
High order and many variables
The above arithmetic can be generalized to calculate second order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted.
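A minimal sketch of degree-2 truncated Taylor arithmetic (the type name Jet2 and the example function are illustrative, not taken from the source): each value carries its zeroth, first and second derivative with respect to a single chosen input, and the product rule supplies $(fg)'' = f''g + 2f'g' + fg''$.
#include <iostream>

struct Jet2 {
    double v, d1, d2;   // value, first derivative, second derivative
    Jet2(double v, double d1 = 0, double d2 = 0): v(v), d1(d1), d2(d2) {}
    Jet2 operator+(Jet2 o) { return Jet2(v + o.v, d1 + o.d1, d2 + o.d2); }
    Jet2 operator*(Jet2 o) {
        // Leibniz rule truncated at second order.
        return Jet2(v * o.v,
                    d1 * o.v + v * o.d1,
                    d2 * o.v + 2.0 * d1 * o.d1 + v * o.d2);
    }
};

int main() {
    // g(x) = x * x * (x + 3) at x = 2:  g = 20, g' = 3x^2 + 6x = 24, g'' = 6x + 6 = 18
    Jet2 x(2.0, 1.0, 0.0);              // seed: dx/dx = 1, d^2x/dx^2 = 0
    Jet2 g = x * x * (x + Jet2(3.0));
    std::cout << "g = " << g.v << ", g' = " << g.d1 << ", g'' = " << g.d2 << std::endl;
    return 0;
}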
Implementation
Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.
Source code transformation (SCT)
The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.
Source code transformation can be implemented for all programming languages, and it is also easier for the compiler to do compile time optimizations. However, the implementation of the AD tool itself is more difficult and the build system is more complex.
Operator overloading (OO)
Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations. Due to the inherent operator overloading overhead on each loop, this approach usually demonstrates weaker speed performance.
Operator overloading and source code transformation
Overloaded operators can be used to extract the valuation graph, followed by automatic generation of the AD version of the primal function at run-time. Unlike classic operator-overloading AAD, such an AD function does not change from one iteration to the next. Hence there is no OO or tape interpretation run-time overhead per Xi sample.
Since the AD function is generated at runtime, it can be optimised to take into account the current state of the program and to precompute certain values. In addition, it can be generated in a way that consistently utilizes native CPU vectorization to process 4 (8)-double chunks of user data (AVX2/AVX512 speed-up of x4–x8). With multithreading taken into account, such an approach can lead to a final acceleration of the order of 8 × #Cores compared to traditional AAD tools. A reference implementation is available on GitHub.
See also
Differentiable programming
Notes
References
Further reading
External links
www.autodiff.org, An "entry site to everything you want to know about automatic differentiation"
Automatic Differentiation of Parallel OpenMP Programs
Automatic Differentiation, C++ Templates and Photogrammetry
Automatic Differentiation, Operator Overloading Approach
Compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface Automatic Differentiation of Fortran programs
Description and example code for forward Automatic Differentiation in Scala
finmath-lib stochastic automatic differentiation, Automatic differentiation for random variables (Java implementation of the stochastic automatic differentiation).
Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem
C++ Template-based automatic differentiation article and implementation
Tangent Source-to-Source Debuggable Derivatives
Exact First- and Second-Order Greeks by Algorithmic Differentiation
Adjoint Algorithmic Differentiation of a GPU Accelerated Application
Adjoint Methods in Computational Finance Software Tool Support for Algorithmic Differentiation
More than a Thousand Fold Speed Up for xVA Pricing Calculations with Intel Xeon Scalable Processors
Sparse truncated Taylor series implementation with VBIC95 example for higher order derivatives
Differential calculus
Computer algebra
Green chemistry metrics
Green chemistry metrics describe aspects of a chemical process relating to the principles of green chemistry. The metrics serve to quantify the efficiency or environmental performance of chemical processes, and allow changes in performance to be measured. The motivation for using metrics is the expectation that quantifying technical and environmental improvements can make the benefits of new technologies more tangible, perceptible, or understandable. This, in turn, is likely to aid the communication of research and potentially facilitate the wider adoption of green chemistry technologies in industry.
For a non-chemist, an understandable method of describing the improvement might be a decrease of X unit cost per kilogram of compound Y. This, however, might be an over-simplification. For example, it would not allow a chemist to visualize the improvement made or to understand changes in material toxicity and process hazards. For yield improvements and selectivity increases, simple percentages are suitable, but this simplistic approach may not always be appropriate. For example, when a highly pyrophoric reagent is replaced by a benign one, a numerical value is difficult to assign but the improvement is obvious, if all other factors are similar.
Numerous metrics have been formulated over time. A general problem is that the more accurate and universally applicable the metric devised, the more complex and less practical to use it becomes. A good metric must be clearly defined, simple, measurable, objective rather than subjective, and must ultimately drive the desired behavior.
Mass-based versus impact-based metrics
The fundamental purpose of metrics is to allow comparisons. If there are several economically viable ways to make a product, which one causes the least environmental harm (i.e. which is the greenest)? The metrics that have been developed to achieve that purpose fall into two groups: mass-based metrics and impact-based metrics.
The simplest metrics are based upon the mass of materials rather than their impact. Atom economy, E-factor, yield, reaction mass efficiency and effective mass efficiency are all metrics that compare the mass of desired product to the mass of waste. They do not differentiate between more harmful and less harmful wastes. A process that produces less waste may appear to be greener than the alternatives according to mass-based metrics but may in fact be less green if the waste produced is particularly harmful to the environment. This serious limitation means that mass-based metrics can not be used to determine which synthetic method is greener. However, mass-based metrics have the great advantage of simplicity: they can be calculated from readily available data with few assumptions. For companies that produce thousands of products, mass-based metrics may be the only viable choice for monitoring company-wide reductions in environmental harm.
In contrast, impact-based metrics such as those used in life-cycle assessment evaluate environmental impact as well as mass, making them much more suitable for selecting the greenest of several options or synthetic pathways. Some of them, such as those for acidification, ozone depletion, and resource depletion, are just as easy to calculate as mass-based metrics but require emissions data that may not be readily available. Others, such as those for inhalation toxicity, ingestion toxicity, and various forms of aquatic ecotoxicity, are more complex to calculate in addition to requiring emissions data.
Atom economy
Atom economy was designed by Barry Trost as a framework by which organic chemists would pursue “greener” chemistry. The atom economy is the proportion of the reactants' mass that remains in the final product.
For a generic multi-stage reaction used for producing R:
A + B → P + X
P + C → Q + Y
Q + D → R + Z
The atom economy is calculated by
$\text{atom economy} = \frac{\text{molecular mass of } R}{\text{molecular mass of } A + B + C + D} \times 100\%$
The conservation of mass principle dictates that the total mass of the reactants is the same as the total mass of the products. In the above example, the sum of molecular masses of A, B, C and D should be equal to that of R, X, Y and Z. As only R is the useful product, the atoms of X, Y and Z are said to be wasted as by-products. The economic and environmental costs of disposing of these wastes make a reaction with low atom economy "less green".
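A minimal sketch of this bookkeeping (all molecular masses are placeholder values, not taken from the text):
#include <iostream>

int main() {
    // Molecular masses in g/mol for the scheme A + B + C + D -> R (+ X + Y + Z).
    double mA = 100.0, mB = 50.0, mC = 80.0, mD = 30.0;   // reactants (assumed)
    double mR = 150.0;                                    // desired product R (assumed)

    double atomEconomy = 100.0 * mR / (mA + mB + mC + mD);
    std::cout << "atom economy = " << atomEconomy << " %" << std::endl;
    // By conservation of mass, the remaining mass (mA+mB+mC+mD) - mR
    // ends up in the by-products X, Y and Z.
    return 0;
}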
A further simplified version of this is the carbon economy. It is how much carbon ends up in the useful product compared to how much carbon was used to create the product.
This metric is a good simplification for use in the pharmaceutical industry as it takes into account the stoichiometry of reactants and products. Furthermore, this metric is of interest to the pharmaceutical industry where development of carbon skeletons is key to their work.
The atom economy calculation is a simple representation of the “greenness” of a reaction as it can be carried out without the need for experimental results. Nevertheless, it can be useful in the early design stage of process synthesis.
The drawback of this type of analysis is that assumptions have to be made. In an ideal chemical process, the amount of starting materials or reactants equals the amount of all products generated and no atom is lost. However, in most processes, some of the consumed reactant atoms do not become part of the products, but remain as unreacted reactants, or are lost in some side reactions. In addition, solvents and the energy used for the reaction are ignored in this calculation, but they may have non-negligible environmental impacts.
Percentage yield
Percentage yield is calculated by dividing the amount of the obtained desired product by the theoretical yield. In a chemical process, the reaction is usually reversible, thus reactants are not completely converted into products; some reactants are also lost to undesired side reactions. To evaluate these losses of chemicals, the actual yield has to be measured experimentally.
As percentage yield is affected by chemical equilibrium, allowing one or more reactants to be in great excess can increase the yield. However, this may not be considered a "greener" method, as it implies that a greater amount of the excess reactant remains unreacted and is therefore wasted. To evaluate the use of excess reactants, the excess reactant factor can be calculated.
If this value is far greater than 1, then the excess reactants may be a large waste of chemicals and costs. This can be a concern when raw materials have high economic costs or environmental costs in extraction.
In addition, increasing the temperature can also increase the yield of some endothermic reactions, but at the expense of consuming more energy. Hence this may not be an attractive method either.
Reaction mass efficiency
The reaction mass efficiency is the percentage of the actual mass of the desired product relative to the mass of all reactants used. It takes into account both atom economy and chemical yield.
Reaction mass efficiency, together with all metrics mentioned above, shows the “greenness” of a reaction but not of a process. None of these metrics takes into account all of the waste produced. For example, these metrics could present a rearrangement as “very green” but fail to address any solvent, work-up, and energy issues that make the process less attractive.
Effective mass efficiency
A metric similar to reaction mass efficiency is the effective mass efficiency, as suggested by Hudlicky et al. It is defined as the percentage of the mass of the desired product relative to the mass of all non-benign reagents used in its synthesis. The reagents here may include any used reactant, solvent or catalyst.
Note that when most reagents are benign, the effective mass efficiency can be greater than 100%. This metric requires further definition of a benign substance. Hudlicky defines it as “those by-products, reagents or solvents that have no environmental risk associated with them, for example, water, low-concentration saline, dilute ethanol, autoclaved cell mass, etc.”. This definition leaves the metric open to criticism, as nothing is absolutely benign (which is a subjective term), and even the substances listed in the definition have some environmental impact associated with them. The formula also fails to address the level of toxicity associated with a process. Until all toxicology data is available for all chemicals and a term dealing with these levels of “benign” reagents is written into the formula, the effective mass efficiency is not the best metric for chemistry.
Environmental factor
The first general metric for green chemistry remains one of the most flexible and popular ones. Roger A. Sheldon’s environmental factor (E-factor) can be made as complex and thorough or as simple as desired and useful.
The E-factor of a process is the ratio of the mass of waste per mass of product:
$E = \frac{\text{total mass of waste}}{\text{mass of product}}$
As examples, Sheldon calculated E-factors of various industries:
It highlights the waste produced in the process as opposed to the reaction, thus helping those who try to fulfil one of the twelve principles of green chemistry to avoid waste production. E-factors can be combined to assess multi-step reactions step by step or in one calculation. E-factors can discount recyclable materials such as recycled solvents and re-used catalysts, which increases the accuracy of the metric but ignores the energy involved in the recovery (this is often included theoretically by assuming 90% solvent recovery). The main difficulty with E-factors is the need to define system boundaries, for example, which stages of the production or product life-cycle to consider before calculations can be made.
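A minimal sketch of a site-level E-factor from a facility mass balance, with purely illustrative numbers:
#include <iostream>

int main() {
    double massIn      = 1200.0;   // total mass of material entering the site, kg (assumed)
    double massProduct =  200.0;   // mass leaving as saleable product, kg (assumed)
    double massWaste   = massIn - massProduct;   // everything else is counted as waste

    double eFactor = massWaste / massProduct;
    std::cout << "E-factor = " << eFactor << " kg waste per kg product" << std::endl;
    return 0;
}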
This metric is simple to apply industrially, as a production facility can measure how much material enters the site and how much leaves as product and waste, thereby directly giving an accurate global E-factor for the site. Sheldon's analyses (see table) demonstrate that oil companies produce less waste than pharmaceuticals as a percentage of material processed. This reflects the fact that the profit margins in the oil industry require them to minimise waste and find uses for products which would normally be discarded as waste. By contrast the pharmaceutical sector is more focused on molecule manufacture and quality. The (currently) high profit margins within the sector mean that there is less concern about the comparatively large amounts of waste that are produced (especially considering the volumes used) although it has to be noted that, despite the percentage waste and E-factor being high, the pharmaceutical sector produces a much lower tonnage of waste than any other sector. This table encouraged a number of large pharmaceutical companies to commence “green” chemistry programs.
The EcoScale
The EcoScale metric was proposed in an article in the Beilstein Journal of Organic Chemistry in 2006 for evaluation of the effectiveness of a synthetic reaction. It is characterized by simplicity and general applicability. Like the yield-based scale, the EcoScale gives a score from 0 to 100, but also takes into account cost, safety, technical set-up, energy and purification aspects. It is obtained by assigning a value of 100 to an ideal reaction defined as "Compound A (substrate) undergoes a reaction with (or in the presence of) inexpensive compound(s) B to give the desired compound C in 100% yield at room temperature with a minimal risk for the operator and a minimal impact on the environment", and then subtracting penalty points for non-ideal conditions. These penalty points take into account both the advantages and disadvantages of specific reagents, set-ups and technologies.
References
General references
Green chemistry
Basis set (chemistry)
In theoretical and computational chemistry, a basis set is a set of functions (called basis functions) that is used to represent the electronic wave function in the Hartree–Fock method or density-functional theory in order to turn the partial differential equations of the model into algebraic equations suitable for efficient implementation on a computer.
The use of basis sets is equivalent to the use of an approximate resolution of the identity: the orbitals are expanded within the basis set as a linear combination of the basis functions , where the expansion coefficients are given by .
The basis set can either be composed of atomic orbitals (yielding the linear combination of atomic orbitals approach), which is the usual choice within the quantum chemistry community; plane waves which are typically used within the solid state community, or real-space approaches. Several types of atomic orbitals can be used: Gaussian-type orbitals, Slater-type orbitals, or numerical atomic orbitals. Out of the three, Gaussian-type orbitals are by far the most often used, as they allow efficient implementations of post-Hartree–Fock methods.
Introduction
In modern computational chemistry, quantum chemical calculations are performed using a finite set of basis functions. When the finite basis is expanded towards an (infinite) complete set of functions, calculations using such a basis set are said to approach the complete basis set (CBS) limit. In this context, basis function and atomic orbital are sometimes used interchangeably, although the basis functions are usually not true atomic orbitals.
Within the basis set, the wavefunction is represented as a vector, the components of which correspond to coefficients of the basis functions in the linear expansion. In such a basis, one-electron operators correspond to matrices (a.k.a. rank two tensors), whereas two-electron operators are rank four tensors.
When molecular calculations are performed, it is common to use a basis composed of atomic orbitals, centered at each nucleus within the molecule (linear combination of atomic orbitals ansatz). The physically best motivated basis set are Slater-type orbitals (STOs),
which are solutions to the Schrödinger equation of hydrogen-like atoms, and decay exponentially far away from the nucleus. It can be shown that the molecular orbitals of Hartree–Fock and density-functional theory also exhibit exponential decay. Furthermore, S-type STOs also satisfy Kato's cusp condition at the nucleus, meaning that they are able to accurately describe electron density near the nucleus. However, hydrogen-like atoms lack many-electron interactions, so these orbitals do not accurately describe electron correlation.
Unfortunately, calculating integrals with STOs is computationally difficult and it was later realized by Frank Boys that STOs could be approximated as linear combinations of Gaussian-type orbitals (GTOs) instead. Because the product of two GTOs can be written as a linear combination of GTOs, integrals with Gaussian basis functions can be written in closed form, which leads to huge computational savings (see John Pople).
Dozens of Gaussian-type orbital basis sets have been published in the literature. Basis sets typically come in hierarchies of increasing size, giving a controlled way to obtain more accurate solutions, however at a higher cost.
The smallest basis sets are called minimal basis sets. A minimal basis set is one in which, on each atom in the molecule, a single basis function is used for each orbital in a Hartree–Fock calculation on the free atom. For atoms such as lithium, basis functions of p type are also added to the basis functions that correspond to the 1s and 2s orbitals of the free atom, because lithium also has a 1s2p bound state. For example, each atom in the second period of the periodic system (Li – Ne) would have a basis set of five functions (two s functions and three p functions).
A minimal basis set may already be exact for the gas-phase atom at the self-consistent field level of theory. In the next level, additional functions are added to describe polarization of the electron density of the atom in molecules. These are called polarization functions. For example, while the minimal basis set for hydrogen is one function approximating the 1s atomic orbital, a simple polarized basis set typically has two s- and one p-function (which consists of three basis functions: px, py and pz). This adds flexibility to the basis set, effectively allowing molecular orbitals involving the hydrogen atom to be more asymmetric about the hydrogen nucleus. This is very important for modeling chemical bonding, because the bonds are often polarized. Similarly, d-type functions can be added to a basis set with valence p orbitals, and f-functions to a basis set with d-type orbitals, and so on.
Another common addition to basis sets is the addition of diffuse functions. These are extended Gaussian basis functions with a small exponent, which give flexibility to the "tail" portion of the atomic orbitals, far away from the nucleus. Diffuse basis functions are important for describing anions or dipole moments, but they can also be important for accurate modeling of intra- and inter-molecular bonding.
STO hierarchy
The most common minimal basis set is STO-nG, where n is an integer. The STO-nG basis sets are derived from a minimal Slater-type orbital basis set, with n representing the number of Gaussian primitive functions used to represent each Slater-type orbital. Minimal basis sets typically give rough results that are insufficient for research-quality publication, but are much cheaper than their larger counterparts. Commonly used minimal basis sets of this type are:
STO-3G
STO-4G
STO-6G
STO-3G* – Polarized version of STO-3G
There are several other minimum basis sets that have been used such as the MidiX basis sets.
Split-valence basis sets
During most molecular bonding, it is the valence electrons which principally take part in the bonding. In recognition of this fact, it is common to represent valence orbitals by more than one basis function (each of which can in turn be composed of a fixed linear combination of primitive Gaussian functions). Basis sets in which there are multiple basis functions corresponding to each valence atomic orbital are called valence double, triple, quadruple-zeta, and so on, basis sets (zeta, ζ, was commonly used to represent the exponent of an STO basis function). Since the different orbitals of the split have different spatial extents, the combination allows the electron density to adjust its spatial extent appropriate to the particular molecular environment. In contrast, minimal basis sets lack the flexibility to adjust to different molecular environments.
Pople basis sets
The notation for the split-valence basis sets arising from the group of John Pople is typically X-YZg. In this case, X represents the number of primitive Gaussians comprising each core atomic orbital basis function. The Y and Z indicate that the valence orbitals are composed of two basis functions each, the first one composed of a linear combination of Y primitive Gaussian functions, the other composed of a linear combination of Z primitive Gaussian functions. In this case, the presence of two numbers after the hyphens implies that this basis set is a split-valence double-zeta basis set. Split-valence triple- and quadruple-zeta basis sets are also used, denoted as X-YZWg, X-YZWVg, etc.
Polarization functions are denoted by two different notations. The original Pople notation added "*" to indicate that all "heavy" atoms (everything but H and He) have a small set of polarization functions added to the basis (in the case of carbon, a set of 3d orbital functions). The "**" notation indicates that all "light" atoms also receive polarization functions (this adds a set of 2p orbitals to the basis for each hydrogen atom). Eventually it became desirable to add more polarization to the basis sets, and a new notation was developed in which the number and types of polarization functions are given explicitly in parentheses in the order (heavy,light) but with the principal quantum numbers of the orbitals implicit. For example, the * notation becomes (d) and the ** notation is now given as (d,p). If instead 3d and 4f functions were added to each heavy atom and 2p, 3p, 3d functions were added to each light atom, the notation would become (df,2pd).
In all cases, diffuse functions are indicated by either adding a + before the letter G (diffuse functions on heavy atoms only) or ++ (diffuse functions are added to all atoms).
Here is a list of commonly used split-valence basis sets of this type:
3-21G
3-21G* – Polarization functions on heavy atoms
3-21G** – Polarization functions on heavy atoms and hydrogen
3-21+G – Diffuse functions on heavy atoms
3-21++G – Diffuse functions on heavy atoms and hydrogen
3-21+G* – Polarization and diffuse functions on heavy atoms only
3-21+G** – Polarization functions on heavy atoms and hydrogen, as well as diffuse functions on heavy atoms
4-21G
4-31G
6-21G
6-31G
6-31G*
6-31+G*
6-31G(3df,3pd) – 3 sets of d functions and 1 set of f functions on heavy atoms and 3 sets of p functions and 1 set of d functions on hydrogen
6-311G
6-311G*
6-311+G*
6-311+G(2df,2p)
In summary, the 6-31G* basis set (defined for the atoms H through Zn) is a split-valence double-zeta polarized basis set that adds to the 6-31G set six d-type Cartesian-Gaussian polarization functions on each of the atoms Li through Ca and ten f-type Cartesian-Gaussian polarization functions on each of the atoms Sc through Zn.
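As a worked illustration of this counting (the choice of water as the molecule is mine, and the tallies follow the description above with six Cartesian d functions per heavy atom):
#include <iostream>

int main() {
    // Hydrogen in 6-31G*: the valence 1s orbital is split into two contracted
    // s functions; '*' adds no polarization functions to H.
    int nH = 2;

    // Oxygen in 6-31G*: core 1s (1 function), valence 2s split in two
    // (2 functions), valence 2p split in two (2 * 3 = 6 functions),
    // plus one set of six Cartesian d polarization functions.
    int nO = 1 + 2 + 2 * 3 + 6;

    int nWater = nO + 2 * nH;
    std::cout << "O: " << nO << ", H: " << nH
              << ", H2O total: " << nWater << " basis functions" << std::endl;
    // Expected: O: 15, H: 2, H2O total: 19
    return 0;
}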
The Pople basis sets were originally developed for use in Hartree-Fock calculations. Since then, correlation-consistent or polarization-consistent basis sets (see below) have been developed which are usually more appropriate for correlated wave function calculations. For Hartree–Fock or density functional theory, however, Pople basis sets are more efficient (per unit basis function) as compared to other alternatives, provided that the electronic structure program can take advantage of combined sp shells, and are still widely used for molecular structure determination of large molecules and as components of quantum chemistry composite methods.
Correlation-consistent basis sets
Some of the most widely used basis sets are those developed by Dunning and coworkers, since they are designed for converging post-Hartree–Fock calculations systematically to the complete basis set limit using empirical extrapolation techniques.
For first- and second-row atoms, the basis sets are cc-pVNZ where N = D,T,Q,5,6,... (D = double, T = triple, etc.). The 'cc-p', stands for 'correlation-consistent polarized' and the 'V' indicates that only basis sets for the valence orbitals are of multiple-zeta quality. (Like the Pople basis sets, the core orbitals are of single-zeta quality.) They include successively larger shells of polarization (correlating) functions (d, f, g, etc.). More recently these 'correlation-consistent polarized' basis sets have become widely used and are the current state of the art for correlated or post-Hartree–Fock calculations. The aug- prefix is added if diffuse functions are included in the basis. Examples of these are:
cc-pVDZ – Double-zeta
cc-pVTZ – Triple-zeta
cc-pVQZ – Quadruple-zeta
cc-pV5Z – Quintuple-zeta, etc.
aug-cc-pVDZ, etc. – Augmented versions of the preceding basis sets with added diffuse functions.
cc-pCVDZ – Double-zeta with core correlation
For period-3 atoms (Al–Ar), additional functions have turned out to be necessary; these are the cc-pV(N+d)Z basis sets. Even larger atoms may employ pseudopotential basis sets, cc-pVNZ-PP, or relativistic-contracted Douglas-Kroll basis sets, cc-pVNZ-DK.
While the usual Dunning basis sets are for valence-only calculations, the sets can be augmented with further functions that describe core electron correlation. These core-valence sets (cc-pCVXZ) can be used to approach the exact solution to the all-electron problem, and they are necessary for accurate geometric and nuclear property calculations.
Weighted core-valence sets (cc-pwCVXZ) have also been recently suggested. The weighted sets aim to capture core-valence correlation, while neglecting most of core-core correlation, in order to yield accurate geometries with smaller cost than the cc-pCVXZ sets.
Diffuse functions can also be added for describing anions and long-range interactions such as Van der Waals forces, or to perform electronic excited-state calculations and electric-field property calculations. A recipe for constructing additional augmented functions exists; as many as five augmented functions have been used in second hyperpolarizability calculations in the literature. Because of the rigorous construction of these basis sets, extrapolation can be done for almost any energetic property. However, care must be taken when extrapolating energy differences as the individual energy components converge at different rates: the Hartree–Fock energy converges exponentially, whereas the correlation energy converges only polynomially.
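As a sketch of how such an extrapolation is often done in practice, the program below applies the widely used two-point $X^{-3}$ formula for the correlation energy, $E_{\mathrm{CBS}} = (X^3 E_X - Y^3 E_Y)/(X^3 - Y^3)$; this particular formula and the energies are assumptions of the example, not taken from the text above.
#include <cmath>
#include <iostream>

int main() {
    double X = 3.0, EX = -0.2450;   // correlation energy with cc-pVTZ, hartree (assumed)
    double Y = 4.0, EY = -0.2552;   // correlation energy with cc-pVQZ, hartree (assumed)

    // Two-point inverse-cubic extrapolation of the correlation energy.
    double Ecbs = (std::pow(X, 3) * EX - std::pow(Y, 3) * EY) / (std::pow(X, 3) - std::pow(Y, 3));
    std::cout << "estimated CBS correlation energy = " << Ecbs << " hartree" << std::endl;
    return 0;
}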
To understand how to get the number of functions, consider the cc-pVDZ basis set for H:
There are two s (L = 0) functions and one set of p (L = 1) functions whose three components (mL = −1, 0, +1, the projections of L along the z-axis) correspond to px, py and pz. Thus, there are five spatial orbitals in total. Note that each orbital can hold two electrons of opposite spin.
As another example, Ar [1s, 2s, 2p, 3s, 3p] has 3 s orbitals (L = 0) and 2 sets of p orbitals (L = 1). Using cc-pVDZ, the orbitals are [1s, 2s, 2p, 3s, 3s', 3p, 3p', 3d'] (where ' marks the added polarization and valence-doubling functions), with 4 s orbitals (4 basis functions), 3 sets of p orbitals (3 × 3 = 9 basis functions), and 1 set of d orbitals (5 basis functions). Adding up the basis functions gives a total of 18 functions for Ar with the cc-pVDZ basis set.
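The bookkeeping in the two examples above can be reproduced with a few lines of code. The following Python sketch is purely illustrative: the shell compositions are typed in by hand (they are not read from any basis-set library), and spherical counts (2L + 1 functions per shell) are assumed.

# Count basis functions from a list of shell labels, assuming spherical
# (pure) functions: s = 1, p = 3, d = 5, f = 7 functions per shell.
FUNCS_PER_SHELL = {"s": 1, "p": 3, "d": 5, "f": 7}

def count_functions(shells):
    """Total number of basis functions for a list of shell labels."""
    return sum(FUNCS_PER_SHELL[shell] for shell in shells)

# cc-pVDZ for H: 2 s shells + 1 p shell  -> 2*1 + 1*3 = 5 functions
h_cc_pvdz = ["s", "s", "p"]
# cc-pVDZ for Ar: 4 s shells + 3 p shells + 1 d shell -> 4 + 9 + 5 = 18 functions
ar_cc_pvdz = ["s"] * 4 + ["p"] * 3 + ["d"]

print("H  cc-pVDZ:", count_functions(h_cc_pvdz), "basis functions")   # 5
print("Ar cc-pVDZ:", count_functions(ar_cc_pvdz), "basis functions")  # 18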
Polarization-consistent basis sets
Density-functional theory has recently become widely used in computational chemistry. However, the correlation-consistent basis sets described above are suboptimal for density-functional theory, because the correlation-consistent sets have been designed for post-Hartree–Fock, while density-functional theory exhibits much more rapid basis set convergence than wave function methods.
Adopting a similar methodology to the correlation-consistent series, Frank Jensen introduced polarization-consistent (pc-n) basis sets as a way to quickly converge density functional theory calculations to the complete basis set limit. Like the Dunning sets, the pc-n sets can be combined with basis set extrapolation techniques to obtain CBS values.
The pc-n sets can be augmented with diffuse functions to obtain aug-pc-n sets.
Karlsruhe basis sets
Some of the various valence adaptations of Karlsruhe basis sets are briefly described below.
def2-SV(P) – Split valence with polarization functions on heavy atoms (not hydrogen)
def2-SVP – Split valence polarization
def2-SVPD – Split valence polarization with diffuse functions
def2-TZVP – Valence triple-zeta polarization
def2-TZVPD – Valence triple-zeta polarization with diffuse functions
def2-TZVPP – Valence triple-zeta with two sets of polarization functions
def2-TZVPPD – Valence triple-zeta with two sets of polarization functions and a set of diffuse functions
def2-QZVP – Valence quadruple-zeta polarization
def2-QZVPD – Valence quadruple-zeta polarization with diffuse functions
def2-QZVPP – Valence quadruple-zeta with two sets of polarization functions
def2-QZVPPD – Valence quadruple-zeta with two sets of polarization functions and a set of diffuse functions
Completeness-optimized basis sets
Gaussian-type orbital basis sets are typically optimized to reproduce the lowest possible energy for the systems used to train the basis set. However, the convergence of the energy does not imply convergence of other properties, such as nuclear magnetic shieldings, the dipole moment, or the electron momentum density, which probe different aspects of the electronic wave function.
Manninen and Vaara have proposed completeness-optimized basis sets, where the exponents are obtained by maximization of the one-electron completeness profile instead of minimization of the energy. Completeness-optimized basis sets are a way to easily approach the complete basis set limit of any property at any level of theory, and the procedure is simple to automate.
Completeness-optimized basis sets are tailored to a specific property. This way, the flexibility of the basis set can be focused on the computational demands of the chosen property, typically yielding much faster convergence to the complete basis set limit than is achievable with energy-optimized basis sets.
Even-tempered basis sets
In 1974 Bardo and Ruedenberg proposed a simple scheme to generate the exponents of a basis set that spans the Hilbert space evenly by following a geometric progression of the form α·β^k (k = 1, 2, ..., N)
for each angular momentum, where N is the number of primitive functions. Here, only the two parameters α and β must be optimized, significantly reducing the dimension of the search space or even avoiding the exponent optimization problem altogether. In order to properly describe delocalized electronic states, a previously optimized standard basis set can be complemented with additional delocalized Gaussian functions with small exponent values, generated by the even-tempered scheme. This approach has also been employed to generate basis sets for quantum particles other than electrons, such as quantum nuclei, negative muons or positrons.
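A minimal Python sketch of the even-tempered recipe is shown below. The numerical values of α (the smallest exponent) and β (the ratio) are arbitrary placeholders chosen only to illustrate the geometric progression; they are not taken from any published basis set.

# Even-tempered exponents alpha * beta**k for one angular momentum channel.
# Only two parameters (alpha and beta) need to be optimized, whatever the
# number of primitives N.
def even_tempered_exponents(alpha, beta, n):
    """Return n exponents forming the geometric progression alpha * beta**k."""
    return [alpha * beta**k for k in range(n)]

# Illustrative values only: alpha = 0.02 (most diffuse), beta = 2.5, N = 8.
for k, zeta in enumerate(even_tempered_exponents(0.02, 2.5, 8)):
    print(f"primitive {k}: exponent = {zeta:.4f}")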
Plane-wave basis sets
In addition to localized basis sets, plane-wave basis sets can also be used in quantum-chemical simulations. Typically, the choice of the plane-wave basis set is based on a cutoff energy: all plane waves in the simulation cell whose kinetic energy lies below the cutoff are included in the calculation. These basis sets are popular in calculations involving three-dimensional periodic boundary conditions.
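To make the cutoff criterion concrete, the sketch below counts the plane waves of a cubic simulation cell whose kinetic energy (in Hartree atomic units, |G|^2/2) lies below a chosen cutoff. The cell edge and the cutoff values are arbitrary examples, not recommendations.

# Count plane waves with |G|^2 / 2 <= E_cut for a cubic cell (atomic units).
import itertools
import math

def count_plane_waves(cell_edge, e_cut):
    g0 = 2.0 * math.pi / cell_edge                 # reciprocal-lattice spacing
    n_max = int(math.sqrt(2.0 * e_cut) / g0) + 1   # largest index worth checking
    count = 0
    for n1, n2, n3 in itertools.product(range(-n_max, n_max + 1), repeat=3):
        g_squared = g0**2 * (n1**2 + n2**2 + n3**2)
        if 0.5 * g_squared <= e_cut:
            count += 1
    return count

for e_cut in (5.0, 10.0, 20.0):                    # cutoffs in hartree
    print(f"E_cut = {e_cut:4.1f} Ha -> {count_plane_waves(10.0, e_cut)} plane waves")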
The main advantage of a plane-wave basis is that it is guaranteed to converge in a smooth, monotonic manner to the target wavefunction. In contrast, when localized basis sets are used, monotonic convergence to the basis set limit may be difficult due to problems with over-completeness: in a large basis set, functions on different atoms start to look alike, and many eigenvalues of the overlap matrix approach zero.
In addition, certain integrals and operations are much easier to program and carry out with plane-wave basis functions than with their localized counterparts. For example, the kinetic energy operator is diagonal in the reciprocal space. Integrals over real-space operators can be efficiently carried out using fast Fourier transforms. The properties of the Fourier transform allow a vector representing the gradient of the total energy with respect to the plane-wave coefficients to be calculated with a computational effort that scales as NPW·ln(NPW), where NPW is the number of plane waves. When this property is combined with separable pseudopotentials of the Kleinman–Bylander type and preconditioned conjugate gradient solution techniques, the dynamic simulation of periodic problems containing hundreds of atoms becomes possible.
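The statement that the kinetic-energy operator is diagonal in reciprocal space can be demonstrated with a short NumPy experiment. The sketch below is a one-dimensional toy (atomic units, arbitrary box length and grid), not code from any plane-wave package; it applies −(1/2) d²/dx² by multiplying the Fourier coefficients by k²/2 and checks the result against a finite-difference derivative.

# Apply the kinetic-energy operator to a periodic function via FFT.
import numpy as np

box, n = 10.0, 64                                   # box length and grid size (examples)
h = box / n
x = np.arange(n) * h
k = 2.0 * np.pi * np.fft.fftfreq(n, d=h)            # plane-wave wavenumbers

psi = np.exp(np.sin(2.0 * np.pi * x / box))         # smooth periodic test function
t_psi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi)).real   # diagonal in k-space

# Cross-check with a real-space finite-difference second derivative.
fd = -0.5 * (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / h**2
print("max deviation from finite differences:", np.max(np.abs(t_psi - fd)))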
In practice, plane-wave basis sets are often used in combination with an 'effective core potential' or pseudopotential, so that the plane waves are only used to describe the valence charge density. This is because core electrons tend to be concentrated very close to the atomic nuclei, resulting in large wavefunction and density gradients near the nuclei which are not easily described by a plane-wave basis set unless a very high energy cutoff, and therefore small wavelength, is used. This combined method of a plane-wave basis set with a core pseudopotential is often abbreviated as a PSPW calculation.
Furthermore, as all functions in the basis are mutually orthogonal and are not associated with any particular atom, plane-wave basis sets do not exhibit basis-set superposition error. However, the plane-wave basis set is dependent on the size of the simulation cell, complicating cell size optimization.
Due to the assumption of periodic boundary conditions, plane-wave basis sets are less well suited to gas-phase calculations than localized basis sets. Large regions of vacuum need to be added on all sides of the gas-phase molecule in order to avoid interactions with the molecule and its periodic copies. However, the plane waves use a similar accuracy to describe the vacuum region as the region where the molecule is, meaning that obtaining the truly noninteracting limit may be computationally costly.
Linearized augmented-plane-wave basis sets
A combination of some of the properties of localized basis sets and plane-wave approaches is achieved by linearized augmented-plane-wave (LAPW) basis sets. These are based on a partitioning of space into nonoverlapping spheres around each atom and an interstitial region in between the spheres. An LAPW basis function is a plane wave in the interstitial region, which is augmented by numerical atomic functions in each sphere. The numerical atomic functions hereby provide a linearized representation of wave functions for arbitrary energies around automatically determined energy parameters.
Similarly to plane-wave basis sets an LAPW basis set is mainly determined by a cutoff parameter for the plane-wave representation in the interstitial region. In the spheres the variational degrees of freedom can be extended by adding local orbitals to the basis set. This allows representations of wavefunctions beyond the linearized description.
The plane waves in the interstitial region imply three-dimensional periodic boundary conditions, though it is possible to introduce additional augmentation regions to reduce this to one or two dimensions, e.g., for the description of chain-like structures or thin films. The atomic-like representation in the spheres makes it possible to treat each atom with its potential singularity at the nucleus and not to rely on a pseudopotential approximation.
The disadvantage of LAPW basis sets is their complex definition, which comes with many parameters that have to be controlled either by the user or an automatic recipe. Another consequence of the form of the basis set is the complexity of the resulting mathematical expressions, e.g., for the calculation of a Hamiltonian matrix or atomic forces.
Real-space basis sets
Real-space approaches offer powerful methods to solve electronic structure problems thanks to their controllable accuracy. Real-space basis sets can be thought to arise from the theory of interpolation, as the central idea is to represent the (unknown) orbitals in terms of some set of interpolation functions.
Various methods have been proposed for constructing the solution in real space, including finite elements, basis splines, Lagrange sinc-functions, and wavelets. Finite difference algorithms are also often included in this category, even though, strictly speaking, they do not form a proper basis set and, unlike e.g. finite element methods, are not variational.
A common feature of all real-space methods is that the accuracy of the numerical basis set is improvable, so that the complete basis set limit can be reached in a systematical manner.
Moreover, in the case of wavelets and finite elements, it is easy to use different levels of accuracy in different parts of the system, so that more points are used close to the nuclei where the wave function undergoes rapid changes and where most of the total energies lie, whereas a coarser representation is sufficient far away from nuclei; this feature is extremely important as it can be used to make all-electron calculations tractable.
For example, in finite element methods (FEMs), the wave function is represented as a linear combination of a set of piecewise polynomials. Lagrange interpolating polynomials (LIPs) are a commonly used basis for FEM calculations. The local interpolation error in a LIP basis of order p is of the form O(h^(p+1)), where h is the element size. The complete basis set can thereby be reached either by going to smaller and smaller elements (i.e. dividing space into smaller and smaller subdivisions; h-adaptive FEM), by switching to the use of higher and higher order polynomials (p-adaptive FEM), or by a combination of both strategies (hp-adaptive FEM). The use of high-order LIPs has been shown to be highly beneficial for accuracy.
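The O(h^(p+1)) behaviour quoted above can be observed numerically. The sketch below, which is only an illustration and not taken from any FEM code, interpolates sin(x) with piecewise Lagrange polynomials of order p and compares the maximum error on meshes whose element size differs by a factor of two; halving h should shrink the error by roughly 2^(p+1).

# Observe the O(h**(p+1)) decay of piecewise Lagrange interpolation error.
import numpy as np

def piecewise_lagrange_error(f, p, n_elem, a=0.0, b=1.0, n_sample=50):
    """Maximum interpolation error of f on [a, b] with n_elem elements of order p."""
    edges = np.linspace(a, b, n_elem + 1)
    err = 0.0
    for x0, x1 in zip(edges[:-1], edges[1:]):
        nodes = np.linspace(x0, x1, p + 1)           # equally spaced LIP nodes
        coeff = np.polyfit(nodes, f(nodes), p)       # interpolating polynomial
        xs = np.linspace(x0, x1, n_sample)
        err = max(err, np.max(np.abs(np.polyval(coeff, xs) - f(xs))))
    return err

for p in (1, 2, 3):
    ratio = piecewise_lagrange_error(np.sin, p, 8) / piecewise_lagrange_error(np.sin, p, 16)
    print(f"order p = {p}: error ratio on halving h = {ratio:.1f} (expected about {2**(p + 1)})")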
See also
Basis set superposition error
Angular momentum
Atomic orbitals
Molecular orbitals
List of quantum chemistry and solid state physics software
References
The basis sets discussed here, along with many others, are covered in the references below, which in turn cite the original journal articles:
https://web.archive.org/web/20070830043639/http://www.chem.swin.edu.au/modules/mod8/basis1.html
External links
EMSL Basis Set Exchange
TURBOMOLE basis set library
CRYSTAL – Basis Sets Library
Dyall Basis Sets Library
Peterson Group Correlation Consistent Basis Sets
Sapporo Segmented Gaussian Basis Sets Library
Stuttgart/Cologne energy-consistent (ab initio) pseudopotentials Library
ChemViz – Basis Sets Lab Activity
Quantum chemistry
Computational chemistry
Theoretical chemistry
Chemical energy | Chemical energy is the energy of chemical substances that is released when the substances undergo a chemical reaction and transform into other substances. Some examples of storage media of chemical energy include batteries, food, and gasoline (as well as oxygen gas, which is of high chemical energy due to its relatively weak double bond and indispensable for chemical-energy release in gasoline combustion). Breaking and re-making chemical bonds involves energy, which may be either absorbed by or evolved from a chemical system. If reactants with relatively weak electron-pair bonds convert to more strongly bonded products, energy is released. Therefore, relatively weakly bonded and unstable molecules store chemical energy.
Energy that can be released or absorbed because of a reaction between chemical substances is equal to the difference between the energy content of the products and the reactants, if the initial and final temperature is the same. This change in energy can be estimated from the bond energies of the reactants and products. It can also be calculated from the internal energies of formation of the reactant molecules and of the product molecules. The internal energy change of a chemical process is equal to the heat exchanged if it is measured under conditions of constant volume and equal initial and final temperature, as in a closed container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is equal to the enthalpy change, in this case the enthalpy of reaction, if initial and final temperatures are equal.)
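As a worked example of estimating a reaction energy from bond energies, the Python sketch below applies rounded, textbook-style average bond enthalpies to the combustion of methane. The numerical bond values are approximate averages assumed for illustration, not measured data for any particular conditions.

# Estimate the enthalpy change of CH4 + 2 O2 -> CO2 + 2 H2O from average
# bond enthalpies in kJ/mol (rounded, illustrative values).
BOND_ENTHALPY = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 463}

bonds_broken = {"C-H": 4, "O=O": 2}     # bonds in the reactants
bonds_formed = {"C=O": 2, "O-H": 4}     # bonds in the products

energy_in = sum(n * BOND_ENTHALPY[b] for b, n in bonds_broken.items())
energy_out = sum(n * BOND_ENTHALPY[b] for b, n in bonds_formed.items())

delta_h = energy_in - energy_out        # negative means energy is released
print(f"estimated reaction enthalpy: {delta_h:+d} kJ/mol")   # about -800 kJ/mol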
A related term is the heat of combustion, which is the energy mostly of the weak double bonds of molecular oxygen released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized to carbon dioxide and water, the energy released is analogous to the heat of combustion (though assessed differently than for a hydrocarbon fuel—see food energy).
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or interactions between them. Chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. For example, when a fuel is burned, the chemical energy of molecular oxygen and the fuel is converted to heat. Green plants transform solar energy to chemical energy (mostly of oxygen) through the process of photosynthesis, and electrical energy can be converted to chemical energy and vice versa through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. It is not a form of potential energy itself, but is more closely related to free energy. The confusion in terminology arises from the fact that in other areas of physics not dominated by entropy, all potential energy is available to do useful work and drives the system to spontaneously undergo changes of configuration, and thus there is no distinction between "free" and "non-free" potential energy (hence the one word "potential"). However, in systems of large entropy such as chemical systems, the total amount of energy present (and conserved according to the first law of thermodynamics) of which this chemical potential energy is a part, is separated from the amount of that energy—thermodynamic free energy (from which chemical potential is derived)—which (appears to) drive the system forward spontaneously as the global entropy increases (in accordance with the second law).
References | 0.776698 | 0.994832 | 0.772684 |
Hydrogenation | Hydrogenation is a chemical reaction between molecular hydrogen (H2) and another compound or element, usually in the presence of a catalyst such as nickel, palladium or platinum. The process is commonly employed to reduce or saturate organic compounds. Hydrogenation typically constitutes the addition of pairs of hydrogen atoms to a molecule, often an alkene. Catalysts are required for the reaction to be usable; non-catalytic hydrogenation takes place only at very high temperatures. Hydrogenation reduces double and triple bonds in hydrocarbons.
Process
Hydrogenation has three components, the unsaturated substrate, the hydrogen (or hydrogen source) and, invariably, a catalyst. The reduction reaction is carried out at different temperatures and pressures depending upon the substrate and the activity of the catalyst.
Related or competing reactions
The same catalysts and conditions that are used for hydrogenation reactions can also lead to isomerization of the alkenes from cis to trans. This process is of great interest because hydrogenation technology generates most of the trans fat in foods. A reaction where bonds are broken while hydrogen is added is called hydrogenolysis, a reaction that may occur to carbon-carbon and carbon-heteroatom (oxygen, nitrogen or halogen) bonds. Some hydrogenations of polar bonds are accompanied by hydrogenolysis.
Hydrogen sources
For hydrogenation, the obvious source of hydrogen is hydrogen gas (H2) itself, which is typically available commercially within the storage medium of a pressurized cylinder. The hydrogenation process often uses greater than 1 atmosphere of H2, usually conveyed from the cylinders and sometimes augmented by "booster pumps". Gaseous hydrogen is produced industrially from hydrocarbons by the process known as steam reforming. For many applications, hydrogen is transferred from donor molecules such as formic acid, isopropanol, and dihydroanthracene. These hydrogen donors undergo dehydrogenation to, respectively, carbon dioxide, acetone, and anthracene. These processes are called transfer hydrogenations.
Substrates
An important characteristic of alkene and alkyne hydrogenations, both the homogeneously and heterogeneously catalyzed versions, is that hydrogen addition occurs with "syn addition", with hydrogen entering from the least hindered side. This reaction can be performed on a variety of different functional groups.
Catalysts
With rare exceptions, H2 is unreactive toward organic compounds in the absence of metal catalysts. The unsaturated substrate is chemisorbed onto the catalyst, with most sites covered by the substrate. In heterogeneous catalysts, hydrogen forms surface hydrides (M-H) from which hydrogens can be transferred to the chemisorbed substrate. Platinum, palladium, rhodium, and ruthenium form highly active catalysts, which operate at lower temperatures and lower pressures of H2. Non-precious metal catalysts, especially those based on nickel (such as Raney nickel and Urushibara nickel), have also been developed as economical alternatives, but they are often slower or require higher temperatures. The trade-off is activity (speed of reaction) vs. cost of the catalyst and cost of the apparatus required for use of high pressures. Notice that Raney-nickel-catalysed hydrogenations require high pressures.
Catalysts are usually classified into two broad classes: homogeneous and heterogeneous. Homogeneous catalysts dissolve in the solvent that contains the unsaturated substrate. Heterogeneous catalysts are solids that are suspended in the same solvent with the substrate or are treated with gaseous substrate.
Homogeneous catalysts
Some well known homogeneous catalysts are indicated below. These are coordination complexes that activate both the unsaturated substrate and the H2. Most typically, these complexes contain platinum group metals, especially Rh and Ir.
Homogeneous catalysts are also used in asymmetric synthesis by the hydrogenation of prochiral substrates. An early demonstration of this approach was the Rh-catalyzed hydrogenation of enamides as precursors to the drug levodopa (L-DOPA). To achieve asymmetric reduction, these catalysts are made chiral by use of chiral diphosphine ligands. Rhodium-catalyzed hydrogenation has also been used in the herbicide production of S-metolachlor, which uses a Josiphos-type ligand (called Xyliphos). In principle asymmetric hydrogenation can be catalyzed by chiral heterogeneous catalysts, but this approach remains more of a curiosity than a useful technology.
Heterogeneous catalysts
Heterogeneous catalysts for hydrogenation are more common industrially. In industry, precious metal hydrogenation catalysts are deposited from solution as a fine powder on the support, which is a cheap, bulky, porous, usually granular material, such as activated carbon, alumina, calcium carbonate or barium sulfate. For example, platinum on carbon is produced by reduction of chloroplatinic acid in situ in carbon. Examples of these catalysts are 5% ruthenium on activated carbon, or 1% platinum on alumina. Base metal catalysts, such as Raney nickel, are typically much cheaper and do not need a support. Also, in the laboratory, unsupported (massive) precious metal catalysts such as platinum black are still used, despite the cost.
As in homogeneous catalysts, the activity is adjusted through changes in the environment around the metal, i.e. the coordination sphere. Different faces of a crystalline heterogeneous catalyst display distinct activities, for example. This can be modified by mixing metals or using different preparation techniques. Similarly, heterogeneous catalysts are affected by their supports.
In many cases, highly empirical modifications involve selective "poisons". Thus, a carefully chosen catalyst can be used to hydrogenate some functional groups without affecting others, such as the hydrogenation of alkenes without touching aromatic rings, or the selective hydrogenation of alkynes to alkenes using Lindlar's catalyst. For example, when the catalyst palladium is placed on barium sulfate and then treated with quinoline, the resulting catalyst reduces alkynes only as far as alkenes. The Lindlar catalyst has been applied to the conversion of phenylacetylene to styrene.
Transfer hydrogenation
Transfer hydrogenation uses hydrogen-donor molecules other than molecular H2. These "sacrificial" hydrogen donors, which can also serve as solvents for the reaction, include hydrazine, formic acid, and alcohols such as isopropanol.
In organic synthesis, transfer hydrogenation is useful for the asymmetric hydrogenation of polar unsaturated substrates, such as ketones, aldehydes and imines, by employing chiral catalysts.
Electrolytic hydrogenation
Polar substrates such as nitriles can be hydrogenated electrochemically, using protic solvents and reducing equivalents as the source of hydrogen.
Thermodynamics and mechanism
The addition of hydrogen to double or triple bonds in hydrocarbons is a type of redox reaction that can be thermodynamically favorable. For example, the addition of hydrogen to ethene has a Gibbs free energy change of −101 kJ·mol−1, making the reaction strongly exergonic; it is also highly exothermic. In the hydrogenation of vegetable oils and fatty acids, for example, the heat released, about 25 kcal per mole (105 kJ/mol), is sufficient to raise the temperature of the oil by 1.6–1.7 °C per unit drop in iodine number.
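The quoted temperature rise can be checked with a rough back-of-the-envelope calculation, sketched below in Python. The heat capacity assumed for hot oil (about 2.5 J/(g·K)) and the use of exactly 105 kJ per mole of double bonds are illustrative assumptions, not figures taken from the source.

# Rough check: temperature rise of an oil per unit drop in iodine value.
M_I2 = 253.8               # g/mol, molar mass of I2
heat_per_mol = 105e3       # J released per mole of C=C hydrogenated (assumed)
cp_oil = 2.5               # J/(g*K), assumed heat capacity of hot oil

# A drop of one iodine-value unit corresponds to 1 g of I2, i.e. 1/M_I2 mol
# of double bonds, per 100 g of oil.
mol_double_bonds_per_gram_oil = (1.0 / M_I2) / 100.0
heat_per_gram_oil = heat_per_mol * mol_double_bonds_per_gram_oil   # J/g
delta_t = heat_per_gram_oil / cp_oil

print(f"temperature rise per iodine-value unit: {delta_t:.1f} K")  # roughly 1.7 K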
However, the reaction rate for most hydrogenation reactions is negligible in the absence of catalysts. The mechanism of metal-catalyzed hydrogenation of alkenes and alkynes has been extensively studied. First of all isotope labeling using deuterium confirms the regiochemistry of the addition:
RCH=CH2 + D2 → RCHDCH2D
Heterogeneous catalysis
On solids, the accepted mechanism is the Horiuti-Polanyi mechanism:
Binding of the unsaturated bond
Dissociation of H2 on the catalyst
Addition of one atom of hydrogen; this step is reversible
Addition of the second atom; effectively irreversible.
In the third step, the alkyl group can revert to alkene, which can detach from the catalyst. Consequently, contact with a hydrogenation catalyst allows cis-trans-isomerization. The trans-alkene can reassociate to the surface and undergo hydrogenation. These details are revealed in part using D2 (deuterium), because recovered alkenes often contain deuterium.
For aromatic substrates, the first hydrogenation is slowest. The product of this step is a cyclohexadiene, which hydrogenates rapidly and is rarely detected. Similarly, the intermediate cyclohexene is ordinarily reduced fully to cyclohexane.
Homogeneous catalysis
In many homogeneous hydrogenation processes, the metal binds to both components to give an intermediate alkene-metal(H)2 complex. The general sequence of reactions is assumed to be as follows or a related sequence of steps:
binding of the hydrogen to give a dihydride complex via oxidative addition (preceding the oxidative addition of H2 is the formation of a dihydrogen complex):
binding of alkene:
transfer of one hydrogen atom from the metal to carbon (migratory insertion):
transfer of the second hydrogen atom from the metal to the alkyl group with simultaneous dissociation of the alkane ("reductive elimination")
Alkene isomerization often accompanies hydrogenation. This important side reaction proceeds by beta-hydride elimination of the alkyl hydride intermediate:
Often the released olefin is trans.
Inorganic substrates
The hydrogenation of nitrogen to give ammonia is conducted on a vast scale by the Haber–Bosch process, consuming an estimated 1% of the world's energy supply.
N2 (nitrogen) + 3 H2 (hydrogen, 200 atm) → 2 NH3 (ammonia)   [Fe catalyst, 350–550 °C]
Oxygen can be partially hydrogenated to give hydrogen peroxide, although this process has not been commercialized. One difficulty is preventing the catalysts from triggering decomposition of the hydrogen peroxide to form water.
Industrial applications
Catalytic hydrogenation has diverse industrial uses. Most frequently, industrial hydrogenation relies on heterogeneous catalysts.
Food industry
The food industry hydrogenates vegetable oils to convert them into solid or semi-solid fats that can be used in spreads, candies, baked goods, and other products like margarine. Vegetable oils are made from polyunsaturated fatty acids (having more than one carbon-carbon double bond). Hydrogenation eliminates some of these double bonds.
Petrochemical industry
In petrochemical processes, hydrogenation is used to convert alkenes and aromatics into saturated alkanes (paraffins) and cycloalkanes (naphthenes), which are less toxic and less reactive. This matters for liquid fuels that are sometimes stored for long periods in air: saturated hydrocarbons exhibit superior storage properties, whereas alkenes tend to form hydroperoxides, which can form gums that interfere with fuel-handling equipment. For example, mineral turpentine is usually hydrogenated. Hydrocracking of heavy residues into diesel is another application. In isomerization and catalytic reforming processes, some hydrogen pressure is maintained to hydrogenolyze coke formed on the catalyst and prevent its accumulation.
Organic chemistry
Hydrogenation is a useful means for converting unsaturated compounds into saturated derivatives. Substrates include not only alkenes and alkynes, but also aldehydes, imines, and nitriles, which are converted into the corresponding saturated compounds, i.e. alcohols and amines. Thus, alkyl aldehydes, which can be synthesized with the oxo process from carbon monoxide and an alkene, can be converted to alcohols. E.g. 1-propanol is produced from propionaldehyde, produced from ethene and carbon monoxide. Xylitol, a polyol, is produced by hydrogenation of the sugar xylose, an aldehyde. Primary amines can be synthesized by hydrogenation of nitriles, while nitriles are readily synthesized from cyanide and a suitable electrophile. For example, isophorone diamine, a precursor to the polyurethane monomer isophorone diisocyanate, is produced from isophorone nitrile by a tandem nitrile hydrogenation/reductive amination by ammonia, wherein hydrogenation converts both the nitrile into an amine and the imine formed from the aldehyde and ammonia into another amine.
Hydrogenation of coal
History
Heterogeneous catalytic hydrogenation
The earliest hydrogenation was the platinum-catalyzed addition of hydrogen to oxygen in Döbereiner's lamp, a device commercialized as early as 1823. The French chemist Paul Sabatier is considered the father of the hydrogenation process. In 1897, building on the earlier work of James Boyce, an American chemist working in the manufacture of soap products, he discovered that traces of nickel catalyzed the addition of hydrogen to molecules of gaseous hydrocarbons in what is now known as the Sabatier process. For this work, Sabatier shared the 1912 Nobel Prize in Chemistry. Wilhelm Normann was awarded a patent in Germany in 1902 and in Britain in 1903 for the hydrogenation of liquid oils, which was the beginning of what is now a worldwide industry. The commercially important Haber–Bosch process, first described in 1905, involves hydrogenation of nitrogen. In the Fischer–Tropsch process, reported in 1922, carbon monoxide, which is easily derived from coal, is hydrogenated to liquid fuels.
In 1922, Voorhees and Adams described an apparatus for performing hydrogenation under pressures above one atmosphere. The Parr shaker, the first product to allow hydrogenation using elevated pressures and temperatures, was commercialized in 1926 based on Voorhees and Adams' research and remains in widespread use. In 1924 Murray Raney developed a finely powdered form of nickel, which is widely used to catalyze hydrogenation reactions such as conversion of nitriles to amines or the production of margarine.
Homogeneous catalytic hydrogenation
In the 1930s, Calvin discovered that copper(II) complexes oxidized H2. The 1960s witnessed the development of well-defined homogeneous catalysts using transition metal complexes, e.g., Wilkinson's catalyst (RhCl(PPh3)3). Soon thereafter cationic Rh and Ir complexes were found to catalyze the hydrogenation of alkenes and carbonyls. In the 1970s, asymmetric hydrogenation was demonstrated in the synthesis of L-DOPA, and the 1990s saw the invention of Noyori asymmetric hydrogenation. The development of homogeneous hydrogenation was influenced by work started in the 1930s and 1940s on the oxo process and Ziegler–Natta polymerization.
Metal-free hydrogenation
For most practical purposes, hydrogenation requires a metal catalyst. Hydrogenation can, however, proceed from some hydrogen donors without catalysts, illustrative hydrogen donors being diimide and aluminium isopropoxide, the latter illustrated by the Meerwein–Ponndorf–Verley reduction. Some metal-free catalytic systems have been investigated in academic research. One such system for the reduction of ketones consists of tert-butanol and potassium tert-butoxide and requires very high temperatures. An example is the hydrogenation of benzophenone.
A chemical kinetics study found this reaction is first-order in all three reactants suggesting a cyclic 6-membered transition state.
Another system for metal-free hydrogenation is based on the phosphine-borane, compound 1, which has been called a frustrated Lewis pair. It reversibly accepts dihydrogen at relatively low temperatures to form the phosphonium borate 2 which can reduce simple hindered imines.
The reduction of nitrobenzene to aniline has been reported to be catalysed by fullerene, its mono-anion, atmospheric hydrogen and UV light.
Equipment used for hydrogenation
Today's bench chemist has three main choices of hydrogenation equipment:
Batch hydrogenation under atmospheric conditions
Batch hydrogenation at elevated temperature and/or pressure
Flow hydrogenation
Batch hydrogenation under atmospheric conditions
The original and still a commonly practised form of hydrogenation in teaching laboratories, this process is usually effected by adding solid catalyst to a round-bottom flask of dissolved reactant which has been purged of air with nitrogen or argon gas, and sealing the mixture with a penetrable rubber seal. Hydrogen gas is then supplied from a H2-filled balloon. The resulting three-phase mixture is agitated to promote mixing. Hydrogen uptake can be monitored, which can be useful for following the progress of a hydrogenation. This is achieved either by using a graduated tube containing a coloured liquid, usually aqueous copper sulfate, or with gauges for each reaction vessel.
Batch hydrogenation at elevated temperature and/or pressure
Since many hydrogenation reactions, such as hydrogenolysis of protecting groups and the reduction of aromatic systems, proceed extremely sluggishly at ambient temperature and pressure, pressurised systems are popular. In these cases, catalyst is added to a solution of reactant under an inert atmosphere in a pressure vessel. Hydrogen is added directly from a cylinder or built-in laboratory hydrogen source, and the pressurized slurry is mechanically rocked to provide agitation, or a spinning basket is used. Recent advances in electrolysis technology have led to the development of high-pressure hydrogen generators, which generate hydrogen up to 1,400 psi (100 bar) from water. Heat may also be used, as the pressure compensates for the associated reduction in gas solubility.
Flow hydrogenation
Flow hydrogenation has become a popular technique at the bench and increasingly at the process scale. This technique involves continuously flowing a dilute stream of dissolved reactant over a fixed-bed catalyst in the presence of hydrogen. Using established high-performance liquid chromatography technology, this technique allows the application of pressures from atmospheric up to strongly elevated values. Elevated temperatures may also be used. At the bench scale, systems use a range of pre-packed catalysts, which eliminates the need for weighing and filtering pyrophoric catalysts.
Industrial reactors
Catalytic hydrogenation is done in a tubular plug-flow reactor packed with a supported catalyst. The pressures and temperatures are typically high, although this depends on the catalyst. Catalyst loading is typically much lower than in laboratory batch hydrogenation, and various promoters are added to the metal, or mixed metals are used, to improve activity, selectivity and catalyst stability. The use of nickel is common despite its low activity, due to its low cost compared to precious metals.
Gas liquid induction reactors (hydrogenator) are also used for carrying out catalytic hydrogenation.
See also
Carbon neutral fuel
Dehydrogenation
H-Bio
Hydrodesulfurization, hydrotreater and oil desulfurization
Hydrogenation of carbon–nitrogen double bonds
Josiphos ligands
Timeline of hydrogen technologies
Transfer hydrogenation
References
Further reading
examples of hydrogenation from Organic Syntheses:
Organic Syntheses, Coll. Vol. 7, p.226 (1990).
Organic Syntheses, Coll. Vol. 8, p.609 (1993).
Organic Syntheses, Coll. Vol. 5, p.552 (1973).
Organic Syntheses, Coll. Vol. 3, p.720 (1955).
Organic Syntheses, Coll. Vol. 6, p.371 (1988).
early work on transfer hydrogenation:
External links
"The Magic of Hydro", Popular Mechanics, June 1931, pp. 107–109 – early article for the general public on hydrogenation of oil produced in the 1930s
Addition reactions
Homogeneous catalysis
Industrial processes
Hydrogen
Organic redox reactions
Oil refining
Oil shale technology
Synthetic fuel technologies | 0.775792 | 0.995967 | 0.772663 |
Electrostatics | Electrostatics is a branch of physics that studies slow-moving or stationary electric charges.
Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word for amber, ḗlektron (ἤλεκτρον), was thus the source of the word electricity. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by Coulomb's law.
There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation.
The electrostatic model accurately predicts electrical phenomena in "classical" cases where the velocities are low and the system is macroscopic so no quantum effects are involved. It also plays a role in quantum mechanics, where additional terms also need to be included.
Coulomb's law
Coulomb's law states that the magnitude of the electrostatic force between two point charges is directly proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them.
The force acts along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive.
If r is the distance (in meters) between two point charges q1 and q2, then the magnitude of the force between them is

F = (1 / (4π ε0)) · |q1 q2| / r2,

where ε0 = 8.854 × 10−12 C2⋅N−1⋅m−2 is the vacuum permittivity.
The SI unit of ε0 is equivalently A2⋅s4⋅kg−1⋅m−3 or C2⋅N−1⋅m−2 or F⋅m−1.
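For a numerical sense of scale, the short Python sketch below evaluates this force for two example charges; the charge values and separation are arbitrary illustrations.

# Magnitude of the Coulomb force between two point charges.
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def coulomb_force(q1, q2, r):
    """Force in newtons between charges q1 and q2 (coulombs) separated by r metres."""
    return q1 * q2 / (4.0 * math.pi * EPS0 * r**2)

# Example: two +1 microcoulomb charges, 10 cm apart.
force = coulomb_force(1e-6, 1e-6, 0.10)
print(f"force = {force:.3f} N (positive = repulsive)")    # about 0.9 N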
Electric field
The electric field, E, in units of newtons per coulomb or volts per meter, is a vector field that can be defined everywhere, except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force F on a hypothetical small test charge q at the point, due to Coulomb's law, divided by the charge: E = F / q.
Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point.
A collection of particles with charges qi, located at points ri (called source points), generates at the point r (called the field point) the electric field

E(r) = (1 / (4π ε0)) Σi qi (r − ri) / |r − ri|3,

where r − ri is the displacement vector from a source point to the field point, and (r − ri) / |r − ri| is a unit vector that indicates the direction of the field. For a single point charge q at the origin, the magnitude of this electric field is E = q / (4π ε0 r2), and it points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. The electric field produced by a distribution of charges is given by the volume charge density ρ(r) and can be obtained by converting this sum into a triple integral:

E(r) = (1 / (4π ε0)) ∫ ρ(r′) (r − r′) / |r − r′|3 d3r′
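A direct transcription of the superposition sum into code might look like the following sketch, which evaluates the field of two example point charges at a single field point; all numerical values are arbitrary.

# Electric field at a field point from a set of point charges (superposition).
import numpy as np

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def electric_field(field_point, charges):
    """charges is a list of (q, position) pairs; returns E at field_point in V/m."""
    r = np.asarray(field_point, dtype=float)
    e = np.zeros(3)
    for q, pos in charges:
        d = r - np.asarray(pos, dtype=float)          # source point -> field point
        e += q * d / (4.0 * np.pi * EPS0 * np.linalg.norm(d) ** 3)
    return e

# Example: a +1 nC and a -1 nC charge 2 cm apart (a small dipole).
charges = [(1e-9, (0.0, 0.0, 0.01)), (-1e-9, (0.0, 0.0, -0.01))]
print("E at (0, 0, 0.1) m:", electric_field((0.0, 0.0, 0.1), charges), "V/m")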
Gauss's law
Gauss's law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Many numerical problems can be solved by considering a Gaussian surface around a body. Mathematically, Gauss's law takes the form of an integral equation:

∮S E · dA = (1 / ε0) ∫V ρ dV,

where dV is a volume element. If the charge is distributed over a surface or along a line, replace ρ dV by σ dA or λ dℓ. The divergence theorem allows Gauss's law to be written in differential form:

∇ · E = ρ / ε0,

where ∇ · is the divergence operator.
Poisson and Laplace equations
The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ:

∇2Φ = −ρ / ε0.

This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation:

∇2Φ = 0.
Electrostatic approximation
The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational:

∇ × E = 0.

From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields:

∂B/∂t = 0.
In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as non-relativistic Galilean limits for electromagnetism. In addition, conventional electrostatics ignore quantum effects which have to be added for a complete description.
Electrostatic potential
As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, Φ, called the electrostatic potential (also known as the voltage). An electric field, E, points from regions of high electric potential to regions of low electric potential, expressed mathematically as

E = −∇Φ.

The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point a to point b, with the following line integral:

Φ(b) − Φ(a) = −∫a→b E · dℓ.
From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object).
Electrostatic energy
A test particle's potential energy, UE, can be calculated from a line integral of the work done on it by the field. We integrate from a point at infinity, and assume a collection of N particles of charge qi are already situated at the points ri. This potential energy (in joules) is:

UE = q Φ(r) = (q / (4π ε0)) Σi qi / ri,

where ri is the distance of each charge qi from the test charge q, which is situated at the point r, and Φ(r) is the electric potential that would be at r if the test charge were not present. If only two charges are present, the potential energy is q1 q2 / (4π ε0 r). The total electric potential energy due to a collection of N charges is calculated by assembling these particles one at a time:

UE = (1/2) Σi=1..N qi Φi,

where the following sum, from j = 1 to N, excludes i = j:

Φi = (1 / (4π ε0)) Σj≠i qj / rij.

This electric potential, Φi, is what would be measured at ri if the charge qi were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over the charge density using the prescription Σi qi → ∫ ρ(r) d3r:

UE = (1/2) ∫ ρ(r) Φ(r) d3r = (ε0/2) ∫ |E|2 d3r.

This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts. These two integrals for electric field energy seem to indicate two mutually exclusive formulas for electrostatic energy density, namely (1/2)ρΦ and (ε0/2)|E|2; they yield equal values for the total electrostatic energy only if both are integrated over all space.
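The pairwise assembly of the total energy translates directly into code. The sketch below sums qi qj / (4π ε0 rij) over distinct pairs for a few example charges; the charge values and geometry are arbitrary.

# Total electrostatic energy of a set of point charges, assembled pairwise.
import itertools
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

def electrostatic_energy(charges):
    """charges is a list of (q, (x, y, z)) pairs; returns the energy in joules."""
    energy = 0.0
    for (q1, p1), (q2, p2) in itertools.combinations(charges, 2):
        r12 = math.dist(p1, p2)
        energy += q1 * q2 / (4.0 * math.pi * EPS0 * r12)
    return energy

# Example: three +1 nC charges at the corners of a right triangle with 1 cm legs.
charges = [(1e-9, (0.0, 0.0, 0.0)),
           (1e-9, (0.01, 0.0, 0.0)),
           (1e-9, (0.0, 0.01, 0.0))]
print(f"assembly energy = {electrostatic_energy(charges):.3e} J")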
Electrostatic pressure
On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field E just outside the surface amounts to:

P = (ε0 / 2) E2.
This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
See also
Electrostatic generator, machines that create static electricity.
Electrostatic induction, separation of charges due to electric fields.
Permittivity and relative permittivity, the electric polarizability of materials.
Quantisation of charge, the charge units carried by electrons or protons.
Static electricity, stationary charge accumulated on a material.
Triboelectric effect, separation of charges due to sliding or contact.
References
Further reading
External links
The Feynman Lectures on Physics Vol. II Ch. 4: Electrostatics
Introduction to Electrostatics: Point charges can be treated as a distribution using the Dirac delta function | 0.77455 | 0.997544 | 0.772647 |
Basic research | Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to innovations, basic research also serves to provide insight into nature around us and allows us to respect its innate value. The development of this respect is what drives conservation efforts. Through learning about the environment, conservation efforts can be strengthened using research as a basis. Technological innovations can unintentionally be created through this as well, as seen with examples such as kingfishers' beaks affecting the design for high speed bullet trains in Japan.
Overview
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
History
By country
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
Basic versus applied science
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which characterized the basic researcher as follows: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." The NSF also conducted a study in which it traced the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The amount of basic research that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
See also
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
References
Further reading
Research | 0.776 | 0.995632 | 0.772611 |
Renormalization | Renormalization is a collection of techniques in quantum field theory, statistical field theory, and the theory of self-similar geometric structures, that are used to treat infinities arising in calculated quantities by altering values of these quantities to compensate for effects of their self-interactions. But even if no infinities arose in loop diagrams in quantum field theory, it could be shown that it would be necessary to renormalize the mass and fields appearing in the original Lagrangian.
For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory a cloud of virtual particles, such as photons, positrons, and others surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles (e.g. collisions at different energies) shows that the electron-system behaves as if it had a different mass and charge than initially postulated. Renormalization, in this example, mathematically replaces the initially postulated mass and charge of an electron with the experimentally observed mass and charge. Mathematics and experiments prove that positrons and more massive particles such as protons exhibit precisely the same observed charge as the electron – even in the presence of much stronger interactions and more intense clouds of virtual particles.
Renormalization specifies relationships between parameters in the theory when parameters describing large distance scales differ from parameters describing small distance scales. Physically, the pileup of contributions from an infinity of scales involved in a problem may then result in further infinities. When describing spacetime as a continuum, certain statistical and quantum mechanical constructions are not well-defined. To define them, or make them unambiguous, a continuum limit must carefully remove "construction scaffolding" of lattices at various scales. Renormalization procedures are based on the requirement that certain physical quantities (such as the mass and charge of an electron) equal observed (experimental) values. That is, the experimental value of the physical quantity yields practical applications, but due to their empirical nature the observed measurement represents areas of quantum field theory that require deeper derivation from theoretical bases.
Renormalization was first developed in quantum electrodynamics (QED) to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Despite his later skepticism, it was Paul Dirac who pioneered renormalization.
Today, the point of view has shifted: on the basis of the breakthrough renormalization group insights of Nikolay Bogolyubov and Kenneth Wilson, the focus is on variation of physical quantities across contiguous scales, while distant scales are related to each other through "effective" descriptions. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the suitable specific computational techniques appropriate for each. Wilson clarified which variables of a system are crucial and which are redundant.
Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales.
Self-interactions in classical physics
The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century.
The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius re. The mass–energy in the field is

mem = q2 / (8π ε0 re c2),

which becomes infinite as re → 0. This implies that the point particle would have infinite inertia and thus cannot be accelerated. Incidentally, the value of re that makes mem comparable to the electron mass is (up to a factor of order unity) the classical electron radius, which (setting q = e and restoring factors of c and ε0) turns out to be

re = e2 / (4π ε0 me c2) = α ƛe ≈ 2.8 × 10−15 m,

where α is the fine-structure constant, and ƛe = ħ / (me c) is the reduced Compton wavelength of the electron.
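The numbers involved can be evaluated directly from standard physical constants. The Python sketch below (using rounded CODATA-style values) computes the classical electron radius and shows how the shell self-energy grows as the assumed radius shrinks; it is a numerical illustration only.

# Classical electron radius and the diverging self-energy of a charged shell.
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
EPS0 = 8.8541878128e-12        # vacuum permittivity, F/m
M_E = 9.1093837015e-31         # electron mass, kg
C_LIGHT = 2.99792458e8         # speed of light, m/s

# r_e = e^2 / (4 pi eps0 m_e c^2)
r_e = E_CHARGE**2 / (4.0 * math.pi * EPS0 * M_E * C_LIGHT**2)
print(f"classical electron radius: {r_e:.3e} m")        # about 2.82e-15 m

# Field energy of a charged shell of radius r, U = q^2 / (8 pi eps0 r),
# in units of the electron rest energy: it diverges as r -> 0.
for radius in (1e-10, 1e-13, r_e):
    u_field = E_CHARGE**2 / (8.0 * math.pi * EPS0 * radius)
    print(f"r = {radius:.1e} m: field energy = {u_field / (M_E * C_LIGHT**2):.2e} m_e c^2")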
Renormalization: The total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell (in addition to the mass mentioned above associated with its electric field). If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit. This was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory.
(See also regularization (physics) for an alternative way to remove infinities from this classical problem, assuming new physics exists at small scales.)
When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself. (Analogous to the back-EMF of circuit analysis.) But this back-reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square.
The Abraham–Lorentz theory had a noncausal "pre-acceleration". Sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent.
The trouble was worse in classical field theory than in quantum field theory, because in quantum field theory a charged particle experiences Zitterbewegung due to interference with virtual particle–antiparticle pairs, thus effectively smearing out the charge over a region comparable to the Compton wavelength. In quantum electrodynamics at small coupling, the electromagnetic mass only diverges as the logarithm of the radius of the particle.
Divergences in quantum electrodynamics
When developing quantum electrodynamics in the 1930s, Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac discovered that in perturbative corrections many integrals were divergent (see The problem of infinities).
One way of describing the perturbation theory corrections' divergences was discovered in 1947–49 by Hans Kramers, Hans Bethe,
Julian Schwinger, Richard Feynman, and Shin'ichiro Tomonaga, and systematized by Freeman Dyson in 1949. The divergences appear in radiative corrections involving Feynman diagrams with closed loops of virtual particles in them.
While virtual particles obey conservation of energy and momentum, they can have any energy and momentum, even one that is not allowed by the relativistic energy–momentum relation for the observed mass of that particle (that is, E2 − p2c2 is not necessarily equal to m2c4 for that particle in that process; e.g. for a photon it could be nonzero). Such a particle is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles. A variation in the energy of one particle in the loop can be balanced by an equal and opposite change in the energy of another particle in the loop, without affecting the incoming and outgoing particles. Thus many variations are possible. So to find the amplitude for the loop process, one must integrate over all possible combinations of energy and momentum that could travel around the loop.
These integrals are often divergent, that is, they give infinite answers. The divergences that are significant are the "ultraviolet" (UV) ones. An ultraviolet divergence can be described as one that comes from
the region in the integral where all particles in the loop have large energies and momenta,
very short wavelengths and high-frequency fluctuations of the fields, in the path integral for the field,
very short proper-time between particle emission and absorption, if the loop is thought of as a sum over particle paths.
So these divergences are short-distance, short-time phenomena.
There are exactly three one-loop divergent diagrams in quantum electrodynamics:
(a) A photon creates a virtual electron–positron pair, which then annihilates. This is a vacuum polarization diagram.
(b) An electron quickly emits and reabsorbs a virtual photon, called a self-energy.
(c) An electron emits a photon, emits a second photon, and reabsorbs the first. This process is shown in the section below in figure 2, and it is called a vertex renormalization. The Feynman diagram for this is also called a “penguin diagram” due to its shape remotely resembling a penguin.
The three divergences correspond to the three parameters in the theory under consideration:
The field normalization Z.
The mass of the electron.
The charge of the electron.
The second class of divergence, called an infrared divergence, is due to massless particles, like the photon. Every process involving charged particles emits infinitely many coherent photons of infinite wavelength, and the amplitude for emitting any finite number of photons is zero. For photons, these divergences are well understood. For example, at the 1-loop order, the vertex function has both ultraviolet and infrared divergences. In contrast to the ultraviolet divergence, the infrared divergence does not require the renormalization of a parameter in the theory involved. The infrared divergence of the vertex diagram is removed by including a diagram similar to the vertex diagram with the following important difference: the photon connecting the two legs of the electron is cut and replaced by two on-shell (i.e. real) photons whose wavelengths tend to infinity; this diagram is equivalent to the bremsstrahlung process. This additional diagram must be included because there is no physical way to distinguish a zero-energy photon flowing through a loop as in the vertex diagram and zero-energy photons emitted through bremsstrahlung. From a mathematical point of view, the IR divergences can be regularized by assuming fractional differentiation with respect to a parameter. For example, the expression

$$\left( p^2 - a^2 \right)^{1/2}$$

is well defined at $p = a$ but is UV divergent; if we take the $\tfrac{3}{2}$-th fractional derivative with respect to $-a^2$, we obtain the IR divergence

$$\frac{1}{p^2 - a^2},$$

so we can cure IR divergences by turning them into UV divergences.
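The step being used here is just the power-counting rule for fractional derivatives, quoted heuristically and ignoring normalization constants, in the spirit of the text above:

$$\frac{d^{\alpha}}{du^{\alpha}}\, u^{\beta} \;\propto\; u^{\beta-\alpha}, \qquad \text{so with } u = p^2 - a^2:\quad \frac{d^{3/2}}{du^{3/2}}\, u^{1/2} \;\propto\; u^{-1} = \frac{1}{p^2-a^2}.$$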
A loop divergence
The diagram in Figure 2 shows one of the several one-loop contributions to electron–electron scattering in QED. The electron on the left side of the diagram, represented by the solid line, starts out with 4-momentum $p^\mu$ and ends up with 4-momentum $r^\mu$. It emits a virtual photon carrying $r^\mu - p^\mu$ to transfer energy and momentum to the other electron. But in this diagram, before that happens, it emits another virtual photon carrying 4-momentum $q^\mu$, and it reabsorbs this one after emitting the other virtual photon. Energy and momentum conservation do not determine the 4-momentum $q^\mu$ uniquely, so all possibilities contribute equally and we must integrate.
This diagram's amplitude ends up with, among other things, a factor from the loop of

$$-ie^3 \int \frac{d^4 q}{(2\pi)^4}\, \gamma^\mu \frac{i\left(\gamma^\alpha (r-q)_\alpha + m\right)}{(r-q)^2 - m^2 + i\epsilon}\, \gamma^\rho \frac{i\left(\gamma^\beta (p-q)_\beta + m\right)}{(p-q)^2 - m^2 + i\epsilon}\, \gamma^\nu \frac{-i g_{\mu\nu}}{q^2 + i\epsilon}.$$
The various $\gamma$ factors in this expression are gamma matrices as in the covariant formulation of the Dirac equation; they have to do with the spin of the electron. The factors of $e$ are the electric coupling constant, while the $i\epsilon$ provide a heuristic definition of the contour of integration around the poles in the space of momenta. The important part for our purposes is the dependency on $q$ of the three big factors in the integrand, which are from the propagators of the two electron lines and the photon line in the loop.
This has a piece with two powers of $q$ on top that dominates at large values of $q$ (Pokorski 1987, p. 122):

$$e^3 \gamma^\mu \gamma^\alpha \gamma^\rho \gamma^\beta \gamma_\mu \int \frac{d^4 q}{(2\pi)^4} \frac{q_\alpha q_\beta}{(r-q)^2 (p-q)^2 q^2}.$$
This integral is divergent and infinite, unless we cut it off at finite energy and momentum in some way.
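The logarithmic sensitivity to such a cutoff can be seen already in a scalar caricature of this integral. The sketch below (an illustrative toy, not the full spinor integral above) evaluates the Euclidean radial profile that typical one-loop integrals reduce to, and shows that each factor of 10 in the cutoff adds a roughly constant amount, the signature of a logarithmic divergence:

```python
import numpy as np
from scipy.integrate import quad

# Toy version of a log-divergent loop integral: after Wick rotation and
# angular integration, a typical one-loop integrand behaves like
#   I(Lambda) = integral_0^Lambda  q^3 dq / (q^2 + m^2)^2  ~  ln(Lambda/m)  for Lambda >> m.
m = 1.0

def loop_integral(cutoff, mass=m):
    integrand = lambda q: q**3 / (q**2 + mass**2) ** 2
    value, _ = quad(integrand, 0.0, cutoff)
    return value

for cutoff in [10.0, 100.0, 1000.0, 10000.0]:
    print(f"Lambda = {cutoff:8.0f}   I(Lambda) = {loop_integral(cutoff):.4f}")
# Each factor of 10 in the cutoff adds ~ln(10) ≈ 2.303: the integral has no
# finite limit as Lambda -> infinity, i.e. it is ultraviolet divergent.
```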
Similar loop divergences occur in other quantum field theories.
Renormalized and bare quantities
The solution was to realize that the quantities initially appearing in the theory's formulae (such as the formula for the Lagrangian), representing such things as the electron's electric charge and mass, as well as the normalizations of the quantum fields themselves, did not actually correspond to the physical constants measured in the laboratory. As written, they were bare quantities that did not take into account the contribution of virtual-particle loop effects to the physical constants themselves. Among other things, these effects would include the quantum counterpart of the electromagnetic back-reaction that so vexed classical theorists of electromagnetism. In general, these effects would be just as divergent as the amplitudes under consideration in the first place; so finite measured quantities would, in general, imply divergent bare quantities.
To make contact with reality, then, the formulae would have to be rewritten in terms of measurable, renormalized quantities. The charge of the electron, say, would be defined in terms of a quantity measured at a specific kinematic renormalization point or subtraction point (which will generally have a characteristic energy, called the renormalization scale or simply the energy scale). The parts of the Lagrangian left over, involving the remaining portions of the bare quantities, could then be reinterpreted as counterterms, involved in divergent diagrams exactly canceling out the troublesome divergences for other diagrams.
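As a cartoon of this bookkeeping (a deliberately simple toy model, not QED itself): suppose a loop correction shifts a mass by an amount that grows logarithmically with the cutoff. Holding the measured mass fixed then forces the bare mass to depend on the cutoff, while anything expressed through the measured mass stays cutoff-independent:

```python
import math

M_PHYSICAL = 0.511   # the measured (renormalized) mass we want to reproduce, in MeV
COUPLING   = 0.01    # toy coupling controlling the size of the loop correction

def self_energy(cutoff_mev):
    """Toy log-divergent loop correction delta_m(Lambda)."""
    return COUPLING * M_PHYSICAL * math.log(cutoff_mev / M_PHYSICAL)

def bare_mass(cutoff_mev):
    """Choose the bare mass so that bare + loop correction = measured mass."""
    return M_PHYSICAL - self_energy(cutoff_mev)

for cutoff in [1e3, 1e6, 1e9, 1e12]:
    m0 = bare_mass(cutoff)
    prediction = m0 + self_energy(cutoff)   # what an experiment would see
    print(f"Lambda = {cutoff:8.0e} MeV   bare mass = {m0:+.4f}   measured = {prediction:.4f}")
# The bare mass drifts (and would diverge as Lambda -> infinity), but the
# measured mass, and hence any physics written in terms of it, stays put.
```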
Renormalization in QED
For example, in the Lagrangian of QED,

$$\mathcal{L} = \bar\psi_B \left[ i \gamma_\mu \left( \partial^\mu + i e_B A_B^\mu \right) - m_B \right] \psi_B - \frac{1}{4} F_{B\,\mu\nu} F_B^{\mu\nu},$$

the fields $\psi_B$ and $A_B^\mu$ and the coupling constant $e_B$ are really bare quantities, hence the subscript $B$ above. Conventionally the bare quantities are written so that the corresponding Lagrangian terms are multiples of the renormalized ones:

$$\left( \bar\psi m \psi \right)_B = Z_0\, \bar\psi m \psi,$$
$$\left( \bar\psi \left( \partial^\mu + i e A^\mu \right) \psi \right)_B = Z_1\, \bar\psi \left( \partial^\mu + i e A^\mu \right) \psi,$$
$$\left( F_{\mu\nu} F^{\mu\nu} \right)_B = Z_3\, F_{\mu\nu} F^{\mu\nu}.$$

Gauge invariance, via a Ward–Takahashi identity, turns out to imply that we can renormalize the two terms of the covariant derivative piece $\bar\psi \left( \partial + i e A \right) \psi$ together (Pokorski 1987, p. 115), which is what happened to $Z_2$; it is the same as $Z_1$.
A term in this Lagrangian, for example, the electron–photon interaction pictured in Figure 1, can then be written

$$\mathcal{L}_I = -e \bar\psi \gamma_\mu \psi A^\mu - (Z_1 - 1)\, e \bar\psi \gamma_\mu \psi A^\mu.$$

The physical constant $e$, the electron's charge, can then be defined in terms of some specific experiment: we set the renormalization scale equal to the energy characteristic of this experiment, and the first term gives the interaction we see in the laboratory (up to small, finite corrections from loop diagrams, providing such exotica as the high-order corrections to the magnetic moment). The rest is the counterterm. If the theory is renormalizable (see below for more on this), as it is in QED, the divergent parts of loop diagrams can all be decomposed into pieces with three or fewer legs, with an algebraic form that can be canceled out by the second term (or by the similar counterterms that come from $Z_0$ and $Z_3$).
The diagram with the counterterm's interaction vertex placed as in Figure 3 cancels out the divergence from the loop in Figure 2.
Historically, the splitting of the "bare terms" into the original terms and counterterms came before the renormalization group insight due to Kenneth Wilson. According to such renormalization group insights, detailed in the next section, this splitting is unnatural and actually unphysical, as all scales of the problem enter in continuous systematic ways.
Running couplings
To minimize the contribution of loop diagrams to a given calculation (and therefore make it easier to extract results), one chooses a renormalization point close to the energies and momenta exchanged in the interaction. However, the renormalization point is not itself a physical quantity: the physical predictions of the theory, calculated to all orders, should in principle be independent of the choice of renormalization point, as long as it is within the domain of application of the theory. Changes in renormalization scale will simply affect how much of a result comes from Feynman diagrams without loops, and how much comes from the remaining finite parts of loop diagrams. One can exploit this fact to calculate the effective variation of physical constants with changes in scale. This variation is encoded by beta-functions, and the general theory of this kind of scale-dependence is known as the renormalization group.
Colloquially, particle physicists often speak of certain physical "constants" as varying with the energy of interaction, though in fact, it is the renormalization scale that is the independent quantity. This running does, however, provide a convenient means of describing changes in the behavior of a field theory under changes in the energies involved in an interaction. For example, since the coupling in quantum chromodynamics becomes small at large energy scales, the theory behaves more like a free theory as the energy exchanged in an interaction becomes large – a phenomenon known as asymptotic freedom. Choosing an increasing energy scale and using the renormalization group makes this clear from simple Feynman diagrams; were this not done, the prediction would be the same, but would arise from complicated high-order cancellations.
For example,

$$I = \int_0^\infty \frac{dz}{z} - \int_0^\infty \frac{dz}{z}$$

is ill-defined. To eliminate the divergence, simply change the lower limits of the integrals to $\varepsilon_a$ and $\varepsilon_b$:

$$I = \int_{\varepsilon_a}^\infty \frac{dz}{z} - \int_{\varepsilon_b}^\infty \frac{dz}{z} = \ln\frac{\varepsilon_b}{\varepsilon_a}.$$

Making sure $\varepsilon_b/\varepsilon_a \to 1$, then $I = 0$.
Regularization
Since the quantity $\infty - \infty$ is ill-defined, in order to make this notion of canceling divergences precise, the divergences first have to be tamed mathematically using the theory of limits, in a process known as regularization (Weinberg, 1995).
An essentially arbitrary modification to the loop integrands, or regulator, can make them drop off faster at high energies and momenta, in such a manner that the integrals converge. A regulator has a characteristic energy scale known as the cutoff; taking this cutoff to infinity (or, equivalently, the corresponding length/time scale to zero) recovers the original integrals.
With the regulator in place, and a finite value for the cutoff, divergent terms in the integrals then turn into finite but cutoff-dependent terms. After canceling out these terms with the contributions from cutoff-dependent counterterms, the cutoff is taken to infinity and finite physical results recovered. If physics on scales we can measure is independent of what happens at the very shortest distance and time scales, then it should be possible to get cutoff-independent results for calculations.
Many different types of regulator are used in quantum field theory calculations, each with its advantages and disadvantages. One of the most popular in modern use is dimensional regularization, invented by Gerardus 't Hooft and Martinus J. G. Veltman, which tames the integrals by carrying them into a space with a fictitious fractional number of dimensions. Another is Pauli–Villars regularization, which adds fictitious particles to the theory with very large masses, such that loop integrands involving the massive particles cancel out the existing loops at large momenta.
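For orientation, the standard textbook forms of these two regulators (quoted as generic examples rather than derived here): in dimensional regularization a typical Euclidean loop integral is continued to $d = 4 - \epsilon$ dimensions,

$$\int \frac{d^d q}{(2\pi)^d} \frac{1}{\left(q^2 + \Delta\right)^2} = \frac{\Gamma\!\left(2 - \tfrac{d}{2}\right)}{(4\pi)^{d/2}}\, \Delta^{\,d/2 - 2},$$

so the divergence reappears as a pole proportional to $2/\epsilon$, while in Pauli–Villars regularization each propagator is modified by a heavy fictitious partner of mass $\Lambda$,

$$\frac{1}{q^2 - m^2} \;\longrightarrow\; \frac{1}{q^2 - m^2} - \frac{1}{q^2 - \Lambda^2},$$

which falls off as $1/q^4$ at large momenta and renders logarithmically divergent integrals finite.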
Yet another regularization scheme is the lattice regularization, introduced by Kenneth Wilson, which pretends that our spacetime is constructed by a hypercubical lattice with fixed grid size. This size is a natural cutoff for the maximal momentum that a particle could possess when propagating on the lattice. After doing a calculation on several lattices with different grid sizes, the physical result is extrapolated to grid size 0, corresponding to our natural universe. This presupposes the existence of a scaling limit.
A rigorous mathematical approach to renormalization theory is the so-called causal perturbation theory, where ultraviolet divergences are avoided from the start in calculations by performing well-defined mathematical operations only within the framework of distribution theory. In this approach, divergences are replaced by ambiguity: corresponding to a divergent diagram is a term which now has a finite, but undetermined, coefficient. Other principles, such as gauge symmetry, must then be used to reduce or eliminate the ambiguity.
Attitudes and interpretation
The early formulators of QED and other quantum field theories were, as a rule, dissatisfied with this state of affairs. It seemed illegitimate to do something tantamount to subtracting infinities from infinities to get finite answers.
Freeman Dyson argued that these infinities are of a basic nature and cannot be eliminated by any formal mathematical procedures, such as the renormalization method.
Dirac's criticism was the most persistent. As late as 1975, he was saying:
Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation because this so-called 'good theory' does involve neglecting infinities which appear in its equations, ignoring them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves disregarding a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!
Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985:
The shell game that we play to find n and j is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.
Feynman was concerned that all field theories known in the 1960s had the property that the interactions become infinitely strong at short enough distance scales. This property, called a Landau pole, made it plausible that quantum field theories were all inconsistent. In 1974, Gross, Politzer and Wilczek showed that another quantum field theory, quantum chromodynamics, does not have a Landau pole. Feynman, along with most others, accepted that QCD was a fully consistent theory.
The general unease was almost universal in texts up to the 1970s and 1980s. Beginning in the 1970s, however, inspired by work on the renormalization group and effective field theory, and despite the fact that Dirac and various others—all of whom belonged to the older generation—never withdrew their criticisms, attitudes began to change, especially among younger theorists. Kenneth G. Wilson and others demonstrated that the renormalization group is useful in statistical field theory applied to condensed matter physics, where it provides important insights into the behavior of phase transitions. In condensed matter physics, a physical short-distance regulator exists: matter ceases to be continuous on the scale of atoms. Short-distance divergences in condensed matter physics do not present a philosophical problem since the field theory is only an effective, smoothed-out representation of the behavior of matter anyway; there are no infinities since the cutoff is always finite, and it makes perfect sense that the bare quantities are cutoff-dependent.
If QFT holds all the way down past the Planck length (where it might yield to string theory, causal set theory or something different), then there may be no real problem with short-distance divergences in particle physics either; all field theories could simply be effective field theories. In a sense, this approach echoes the older attitude that the divergences in QFT speak of human ignorance about the workings of nature, but also acknowledges that this ignorance can be quantified and that the resulting effective theories remain useful.
Be that as it may, Salam's remark in 1972 seems still relevant
Field-theoretic infinities – first encountered in Lorentz's computation of electron self-mass – have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may, after all, be circumvented — and finite values for the renormalization constants computed – is considered irrational. Compare Russell's postscript to the third volume of his autobiography The Final Years, 1944–1969 (George Allen and Unwin, Ltd., London 1969), p. 221:
In the modern world, if communities are unhappy, it is often because they have ignorances, habits, beliefs, and passions, which are dearer to them than happiness or even life. I find many men in our dangerous age who seem to be in love with misery and death, and who grow angry when hopes are suggested to them. They think hope is irrational and that, in sitting down to lazy despair, they are merely facing facts.
In QFT, the value of a physical constant, in general, depends on the scale that one chooses as the renormalization point, and it becomes very interesting to examine the renormalization group running of physical constants under changes in the energy scale. The coupling constants in the Standard Model of particle physics vary in different ways with increasing energy scale: the coupling of quantum chromodynamics and the weak isospin coupling of the electroweak force tend to decrease, and the weak hypercharge coupling of the electroweak force tends to increase. At the colossal energy scale of 10¹⁵ GeV (far beyond the reach of our current particle accelerators), they all become approximately the same size (Grotz and Klapdor 1990, p. 254), a major motivation for speculations about grand unified theory. Instead of being only a worrisome problem, renormalization has become an important theoretical tool for studying the behavior of field theories in different regimes.
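A small numerical sketch of such running, using only the standard one-loop renormalization-group equations (the beta-function coefficients and reference values below are illustrative inputs, and only QCD and a single-fermion version of QED are shown rather than the full Standard Model):

```python
import math

def run_alpha(alpha_ref, mu_ref, mu, b0):
    """One-loop running: d(alpha)/d(ln mu) = -(b0/(2*pi)) * alpha**2,
    integrated to 1/alpha(mu) = 1/alpha_ref + (b0/(2*pi)) * ln(mu/mu_ref)."""
    return 1.0 / (1.0 / alpha_ref + (b0 / (2.0 * math.pi)) * math.log(mu / mu_ref))

# QCD with n_f = 5 active quark flavours: b0 = 11 - 2*n_f/3 > 0, so the
# coupling *decreases* at high energy (asymptotic freedom).
b0_qcd = 11.0 - 2.0 * 5 / 3.0
alpha_s_mz = 0.118          # reference value at mu = 91 GeV (the Z mass), an input here
for mu in [10.0, 91.0, 1000.0, 10000.0]:    # GeV
    print(f"mu = {mu:7.0f} GeV   alpha_s ~ {run_alpha(alpha_s_mz, 91.0, mu, b0_qcd):.4f}")

# QED behaves the opposite way: for a single unit-charge fermion the one-loop
# coefficient is negative in this convention (b0 = -4/3), so the effective
# charge grows slowly with energy.
print(f"alpha_QED at 91 GeV ~ {run_alpha(1/137.036, 0.000511, 91.0, -4.0/3.0):.5f}")
```

The positive QCD coefficient makes the strong coupling shrink at high energies, while the electromagnetic coupling creeps upward, in line with the trends described above.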
If a theory featuring renormalization (e.g. QED) can only be sensibly interpreted as an effective field theory, i.e. as an approximation reflecting human ignorance about the workings of nature, then the problem remains of discovering a more accurate theory that does not have these renormalization problems. As Lewis Ryder has put it, "In the Quantum Theory, these [classical] divergences do not disappear; on the contrary, they appear to get worse. And despite the comparative success of renormalisation theory, the feeling remains that there ought to be a more satisfactory way of doing things."
Renormalizability
From this philosophical reassessment, a new concept follows naturally: the notion of renormalizability. Not all theories lend themselves to renormalization in the manner described above, with a finite supply of counterterms and all quantities becoming cutoff-independent at the end of the calculation. If the Lagrangian contains combinations of field operators of high enough dimension in energy units, the counterterms required to cancel all divergences proliferate to infinite number, and, at first glance, the theory would seem to gain an infinite number of free parameters and therefore lose all predictive power, becoming scientifically worthless. Such theories are called nonrenormalizable.
The Standard Model of particle physics contains only renormalizable operators, but the interactions of general relativity become nonrenormalizable operators if one attempts to construct a field theory of quantum gravity in the most straightforward manner (treating the metric in the Einstein–Hilbert Lagrangian as a perturbation about the Minkowski metric), suggesting that perturbation theory is not satisfactory in application to quantum gravity.
However, in an effective field theory, "renormalizability" is, strictly speaking, a misnomer. In nonrenormalizable effective field theory, terms in the Lagrangian do multiply to infinity, but have coefficients suppressed by ever-more-extreme inverse powers of the energy cutoff. If the cutoff is a real, physical quantity—that is, if the theory is only an effective description of physics up to some maximum energy or minimum distance scale—then these additional terms could represent real physical interactions. Assuming that the dimensionless constants in the theory do not get too large, one can group calculations by inverse powers of the cutoff, and extract approximate predictions to finite order in the cutoff that still have a finite number of free parameters. It can even be useful to renormalize these "nonrenormalizable" interactions.
Nonrenormalizable interactions in effective field theories rapidly become weaker as the energy scale becomes much smaller than the cutoff. The classic example is the Fermi theory of the weak nuclear force, a nonrenormalizable effective theory whose cutoff is comparable to the mass of the W particle. This fact may also provide a possible explanation for why almost all of the particle interactions we see are describable by renormalizable theories. It may be that any others that may exist at the GUT or Planck scale simply become too weak to detect in the realm we can observe, with one exception: gravity, whose exceedingly weak interaction is magnified by the presence of the enormous masses of stars and planets.
Renormalization schemes
In actual calculations, the counterterms introduced to cancel the divergences in Feynman diagram calculations beyond tree level must be fixed using a set of renormalisation conditions. The common renormalization schemes in use include:
Minimal subtraction (MS) scheme and the related modified minimal subtraction (MS-bar) scheme
On-shell scheme
Besides, there exists a "natural" definition of the renormalized coupling (combined with the photon propagator) as a propagator of dual free bosons, which does not explicitly require introducing the counterterms.
In statistical physics
History
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
Principles
In more technical terms, let us assume that we have a theory described by a certain function $Z$ of the state variables $\{s_i\}$ and a certain set of coupling constants $\{J_k\}$. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.

Now we consider a certain blocking transformation of the state variables $\{s_i\} \to \{\tilde s_i\}$; the number of $\tilde s_i$ must be lower than the number of $s_i$. Now let us try to rewrite the $Z$ function only in terms of the $\tilde s_i$. If this is achievable by a certain change in the parameters, $\{J_k\} \to \{\tilde J_k\}$, then the theory is said to be renormalizable.

Iterating this blocking transformation generates a flow in the space of couplings, and repeated application drives the couplings toward the fixed points of that flow. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points.
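As a concrete, minimal illustration of blocking (a textbook special case, not part of the original discussion above): for the one-dimensional Ising model in zero field, summing out every other spin (decimation) reproduces an Ising model for the remaining spins with a new coupling given exactly by tanh K' = tanh² K. Iterating this map is a renormalization-group flow:

```python
import math

def block_spin_step(K):
    """Exact decimation of every other spin in the zero-field 1D Ising chain:
    the surviving spins again form an Ising chain with coupling K' where
    tanh(K') = tanh(K)**2  (equivalently K' = 0.5*log(cosh(2K)))."""
    return math.atanh(math.tanh(K) ** 2)

for K0 in [0.5, 1.0, 2.0]:       # dimensionless couplings J/(k_B T)
    K, flow = K0, []
    for _ in range(6):
        K = block_spin_step(K)
        flow.append(K)
    print(f"K0 = {K0}:  " + "  ".join(f"{k:.4f}" for k in flow))
# Every finite starting coupling flows toward K = 0 (the disordered fixed
# point); only K = infinity is an unstable ordered fixed point, which is why
# the 1D Ising model has no finite-temperature phase transition.
```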
Renormalization group fixed points
The most important information in the RG flow is its fixed points. A fixed point is defined by the vanishing of the beta function associated to the flow. Then, fixed points of the renormalization group are by definition scale invariant. In many cases of physical interest scale invariance enlarges to conformal invariance. One then has a conformal field theory at the fixed point.
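In symbols (standard definitions, stated here for completeness): for a coupling $g$ running with the scale $\mu$,

$$\beta(g) \equiv \mu \frac{\partial g}{\partial \mu}, \qquad \beta(g^*) = 0 \ \text{ at a fixed point } g^*,$$

and linearizing the flow around $g^*$, the eigenvalues of $\partial\beta/\partial g$ determine which directions are relevant or irrelevant and, at a critical fixed point, the critical exponents.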
The ability of several theories to flow to the same fixed point leads to universality.
If these fixed points correspond to free field theory, the theory is said to exhibit quantum triviality. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
See also
History of quantum field theory
Quantum triviality
Zeno's paradoxes
Nonoblique correction
References
Further reading
General introduction
DeDeo, Simon; Introduction to Renormalization (2017). Santa Fe Institute Complexity Explorer MOOC. Renormalization from a complex systems point of view, including Markov Chains, Cellular Automata, the real space Ising model, the Krohn-Rhodes Theorem, QED, and rate distortion theory.
Baez, John; Renormalization Made Easy, (2005). A qualitative introduction to the subject.
Blechman, Andrew E.; Renormalization: Our Greatly Misunderstood Friend, (2002). Summary of a lecture; has more information about specific regularization and divergence-subtraction schemes.
Shirkov, Dmitry; Fifty Years of the Renormalization Group, C.E.R.N. Courrier 41(7) (2001). Full text available at : I.O.P Magazines.
E. Elizalde; Zeta regularization techniques with Applications.
Mainly: quantum field theory
N. N. Bogoliubov, D. V. Shirkov (1959): The Theory of Quantized Fields. New York, Interscience. The first text-book on the renormalization group theory.
Ryder, Lewis H.; Quantum Field Theory (Cambridge University Press, 1985), Highly readable textbook, certainly the best introduction to relativistic Q.F.T. for particle physics.
Zee, Anthony; Quantum Field Theory in a Nutshell, Princeton University Press (2003) . Another excellent textbook on Q.F.T.
Weinberg, Steven; The Quantum Theory of Fields (3 volumes) Cambridge University Press (1995). A monumental treatise on Q.F.T. written by a leading expert, Nobel laureate 1979.
Pokorski, Stefan; Gauge Field Theories, Cambridge University Press (1987) .
't Hooft, Gerard; The Glorious Days of Physics – Renormalization of Gauge theories, lecture given at Erice (August/September 1998) by the Nobel laureate 1999 . Full text available at: hep-th/9812203.
Rivasseau, Vincent; An introduction to renormalization, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript.
Rivasseau, Vincent; From perturbative to constructive renormalization, Princeton University Press (1991) . Full text available in PostScript and in PDF (draft version).
Iagolnitzer, Daniel & Magnen, J.; Renormalization group analysis, Encyclopaedia of Mathematics, Kluwer Academic Publisher (1996). Full text available in PostScript and pdf here.
Scharf, Günter; Finite quantum electrodynamics: The causal approach, Springer Verlag Berlin Heidelberg New York (1995) .
A. S. Švarc (Albert Schwarz), Математические основы квантовой теории поля, (Mathematical aspects of quantum field theory), Atomizdat, Moscow, 1975. 368 pp.
Mainly: statistical physics
A. N. Vasil'ev; The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004);
Nigel Goldenfeld; Lectures on Phase Transitions and the Renormalization Group, Frontiers in Physics 85, Westview Press (June, 1992) . Covering the elementary aspects of the physics of phases transitions and the renormalization group, this popular book emphasizes understanding and clarity rather than technical manipulations.
Zinn-Justin, Jean; Quantum Field Theory and Critical Phenomena, Oxford University Press (4th edition – 2002) . A masterpiece on applications of renormalization methods to the calculation of critical exponents in statistical mechanics, following Wilson's ideas (Kenneth Wilson was Nobel laureate 1982).
Zinn-Justin, Jean; Phase Transitions & Renormalization Group: from Theory to Numbers, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . Full text available in PostScript .
Domb, Cyril; The Critical Point: A Historical Introduction to the Modern Theory of Critical Phenomena, CRC Press (March, 1996) .
Brown, Laurie M. (Ed.); Renormalization: From Lorentz to Landau (and Beyond), Springer-Verlag (New York-1993) .
Cardy, John; Scaling and Renormalization in Statistical Physics, Cambridge University Press (1996) .
Miscellaneous
Shirkov, Dmitry; The Bogoliubov Renormalization Group, JINR Communication E2-96-15 (1996). Full text available at: hep-th/9602024
Zinn-Justin, Jean; Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories, in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375–388 (1999). Full text available in PostScript.
Connes, Alain; Symétries Galoisiennes & Renormalisation, Poincaré Seminar (Paris, Oct. 12, 2002), published in : Duplantier, Bertrand; Rivasseau, Vincent (Eds.); Poincaré Seminar 2002, Progress in Mathematical Physics 30, Birkhäuser (2003) . French mathematician Alain Connes (Fields medallist 1982) describes the underlying mathematical structure (the Hopf algebra) of renormalization, and its link to the Riemann-Hilbert problem. Full text (in French) available at .
Cell biology | Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer, and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
History
Cells were first seen in 17th-century Europe with the invention of the compound microscope. In 1665, Robert Hooke referred to the building blocks of all living organisms as "cells" (published in Micrographia) after looking at a piece of cork and observing a structure reminiscent of a monastic cell; however, the cells were dead. They gave no indication to the actual overall components of a cell. A few years later, in 1674, Anton Van Leeuwenhoek was the first to analyze live cells in his examination of algae. Many years later, in 1831, Robert Brown discovered the nucleus. All of this preceded the cell theory which states that all living things are made up of cells and that cells are organisms' functional and structural units. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. 19 years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell and instead are studied in the microbiology subclass of virology.
Techniques
Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to gain a better understanding of the structure and function of cells. Many techniques commonly used to study cell biology are listed below:
Cell culture: Utilizes rapidly growing cells on media which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins).
Fluorescence microscopy: Fluorescent markers such as GFP, are used to label a specific component of the cell. Afterwards, a certain light wavelength is used to excite the fluorescent marker which can then be visualized.
Phase-contrast microscopy: Uses the optical aspect of light to represent the solid, liquid, and gas-phase changes as brightness differences.
Confocal microscopy: Combines fluorescence microscopy with imaging by focusing light and snap shooting instances to form a 3-D image.
Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which will be deflected upon interaction with metal. This ultimately forms an image of the components being studied.
Cytometry: The cells are placed in the machine which uses a beam to scatter the cells based on different aspects and can therefore separate them based on size and content. Cells may also be tagged with GFP-fluorescence and can be separated that way as well.
Cell fractionation: This process requires breaking up the cell using high temperature or sonication, followed by centrifugation to separate the parts of the cell, allowing them to be studied separately.
Cell types
There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelle. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista.
Prokaryotes, both Bacteria and Archaea, reproduce through binary fission. Bacteria, the most prominent type, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. There are many processes that occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility factor allows a bacterium to possess a pilus which allows it to transmit DNA to another bacterium that lacks the F factor, permitting the transfer of resistance genes and allowing the recipient to survive in certain environments.
Structure and function
Structure of eukaryotic cells
Eukaryotic cells are composed of the following organelles:
Nucleus: The nucleus of the cell functions as the genome and genetic information storage for the cell, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus. This is also the site for replication of DNA as well as transcription of DNA to RNA. Afterwards, the RNA is modified and transported out to the cytosol to be translated to protein.
Nucleolus: This structure is within the nucleus, usually dense and spherical. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly.
Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell and a cell's function determines the size and structure of the ER.
Mitochondria: Commonly known as the powerhouse of the cell is a double membrane bound cell organelle. This functions for the production of energy or ATP within the cell. Specifically, this is the place where the Krebs cycle or TCA cycle for the production of NADH and FADH occurs. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP.
Golgi apparatus: This functions to further process, package, and secrete the proteins to their destination. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct it to the correct place. Golgi apparatus also produce glycoproteins and glycolipids.
Lysosome: The lysosome functions to degrade material brought in from the outside of the cell or old organelles. This contains many acid hydrolases, proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes which occurs when a vesicle buds off from the ER and engulfs the material, then, attaches and fuses with the lysosome to allow the material to be degraded.
Ribosomes: Function to translate RNA into protein; they serve as the site of protein synthesis.
Cytoskeleton: Cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cells and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins.
Cell membrane: The cell membrane can be described as a phospholipid bilayer and also consists of lipids and proteins. Because the inside of the bilayer is hydrophobic, molecules that are to participate in reactions within the cell need to be able to cross this membrane layer to get into the cell, which they do via osmotic pressure, diffusion, concentration gradients, and membrane channels.
Centrioles: Function to produce spindle fibers which are used to separate chromosomes during cell division.
Eukaryotic cells may also be composed of the following molecular components:
Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins.
Cilia: They help to propel substances and can also be used for sensory purposes.
Cell metabolism
Cell metabolism is necessary for the production of energy for the cell, and therefore for its survival; it includes many pathways and also sustains the main cell organelles such as the nucleus, the mitochondria, and the cell membrane. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation using the multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. These products are involved in the electron transport chain to ultimately form a proton gradient across the inner mitochondrial membrane. This gradient can then drive the production of ATP during oxidative phosphorylation. Metabolism in plant cells includes photosynthesis, which is essentially the reverse of respiration in that it ultimately produces molecules of glucose.
Cell signaling
Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its surface. Forms of communication can be through:
Ion channels: Can be of different types such as voltage or ligand gated ion channels. They allow for the outflow and inflow of molecules and ions.
G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain and once the ligand binds, this signals a guanine exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, Ip3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example for amplification of a signal is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity.
Receptor tyrosine kinases: Bind growth factors, further promoting the tyrosine on the intracellular portion of the protein to cross phosphorylate. The phosphorylated tyrosine becomes a landing pad for proteins containing an SH2 domain allowing for the activation of Ras and the involvement of the MAP kinase pathway.
Growth and development
Eukaryotic cell cycle
Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development which involves cell growth, DNA replication, cell division, regeneration, and cell death.
The cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phase – which is the cell growth phase – makes up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cells. Cell signaling such as induction can influence nearby cells to determinate the type of cell it will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S phase (DNA replication, damage and repair) are considered to be the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion of the cycle. Mitosis is composed of many stages which include, prophase, metaphase, anaphase, telophase, and cytokinesis, respectively. The ultimate result of mitosis is the formation of two identical daughter cells.
The cell cycle is regulated in cell cycle checkpoints, by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinase, and p53. When the cell has completed its growth process and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it can cause to the organism's survival.
Cell mortality, cell lineage immortality
The ancestry of each present day cell presumably traces back, in an unbroken lineage for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination.
Cell cycle phases
The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). The cell either restarts the cycle from G1 or leaves the cycle through G0 after completing the cycle. The cell can progress from G0 through terminal differentiation. Finally, the interphase refers to the phases of the cell cycle that occur between one mitosis and the next, and includes G1, S, and G2. Thus, the phases are:
G1 phase: the cell grows in size and its contents are replicated.
S phase: the cell replicates each of the 46 chromosomes.
G2 phase: in preparation for cell division, new organelles and proteins form.
M phase: cytokinesis occurs, resulting in two identical daughter cells.
G0 phase: the two cells enter a resting stage where they do their job without actively preparing to divide.
Pathology
The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer, and precancerous cervical lesions that may lead to cervical cancer.
Cell cycle checkpoints and DNA damage repair system
The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. Cell cycle checkpoints are characteristics that constitute an excellent monitoring strategy for accurate cell cycle and divisions. Cdks, associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and de-phosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement, regulation of the DNA repair mechanism, cell cycle alterations, and apoptosis. Among the biochemical structures and processes that detect DNA damage are ATM and ATR, which induce the DNA repair checkpoints.
The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. There are major events that happen during a cell cycle. The processes that happen in the cell cycle include cell development, replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, wherein the cell's parameters are examined and only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in the eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, the cell growth continues while protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. DNA replication is restricted to a separate synthesis phase in eukaryotes, known as the S-phase. During mitosis, which is also known as the M-phase, the segregation of the chromosomes occurs.

DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, on the other hand, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins because DNA acts as a permanent copy of the cell genome. When erroneous nucleotides are incorporated during DNA replication, mutations can occur. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases changed by the insertion of methyl or ethyl groups at the purine ring's O6 position.
Mitochondrial membrane dynamics
Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology because of the discovery of cell signaling pathways by mitochondria which are crucial platforms for cell function regulation such as apoptosis. Their physiological adaptability is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, including endomembrane fusion and fragmentation (separation) and ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic but also complicated cell signaling processes such as pluripotency, proliferation, maturation, aging, and mortality. In turn, post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that affect mitochondrial membrane dynamics substantially.

Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner border membrane, which runs parallel to the OMM, and the cristae, which are deeply twisted, multinucleated invaginations that give room for surface area enlargement and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, on the other hand, is soft and permeable. It, therefore, acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of the mitochondrial study, it has been well documented that mitochondria can have a variety of forms, with both their general and ultra-structural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger systems; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to the adaptive and variable aspect of mitochondria, including their shape and subcellular distribution.
Autophagy
Autophagy is a self-degradative mechanism that regulates energy sources during growth and reaction to dietary stress. Autophagy also cleans up after itself, clearing aggregated proteins, cleaning damaged structures including mitochondria and endoplasmic reticulum and eradicating intracellular infections. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of distinctive and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macro autophagy, micro autophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macro autophagy is triggered, an exclusion membrane incorporates a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, with lysosomal enzymes degrading the components. In micro autophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) ensures protein quality by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein degradation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiologic and stressful situations, this cellular progression is vital for upholding the correct cellular balance. Autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration, due to its involvement in controlling cell integrity. The modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming dietary polyphenols. As a result, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (phagophore), known as nucleation, is the first step in macro-autophagy. The phagophore engulfs dysregulated polypeptides or defective organelles, with membrane contributed by the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's expansion comes to an end with the completion of the autophagosome. The autophagosome then combines with the lysosomal vesicles to form an autolysosome that degrades the encapsulated substances.
Notable cell biologists
Jean Baptiste Carnoy
Peter Agre
Günter Blobel
Robert Brown
Geoffrey M. Cooper
Christian de Duve
Henri Dutrochet
Robert Hooke
H. Robert Horvitz
Marc Kirschner
Anton van Leeuwenhoek
Ira Mellman
Marta Miączyńska
Peter D. Mitchell
Rudolf Virchow
Paul Nurse
George Emil Palade
Keith R. Porter
Ray Rappaport
Michael Swann
Roger Tsien
Edmund Beecher Wilson
Kenneth R. Miller
Matthias Jakob Schleiden
Theodor Schwann
Yoshinori Ohsumi
Jan Evangelista Purkyně
See also
The American Society for Cell Biology
Cell biophysics
Cell disruption
Cell physiology
Cellular adaptation
Cellular microbiology
Institute of Molecular and Cell Biology (disambiguation)
Meiomitosis
Organoid
Outline of cell biology
Notes
References
External links
Aging Cell
"Francis Harry Compton Crick (1916–2004)" by A. Andrei at the Embryo Project Encyclopedia
"Biology Resource By Professor Lin." | 0.774578 | 0.99739 | 0.772557 |
Quantum mechanics | Quantum mechanics is a fundamental theory that describes the behavior of nature at and below the scale of atoms. It is the foundation of all quantum physics, which includes quantum chemistry, quantum field theory, quantum technology, and quantum information science.
Quantum mechanics can describe many systems that classical physics cannot. Classical physics can describe many aspects of nature at an ordinary (macroscopic and (optical) microscopic) scale, but is not sufficient for describing them at very small submicroscopic (atomic and subatomic) scales. Most theories in classical physics can be derived from quantum mechanics as an approximation, valid at large (macroscopic/microscopic) scale.
Quantum systems have bound states that are quantized to discrete values of energy, momentum, angular momentum, and other quantities, in contrast to classical systems where these quantities can be measured continuously. Measurements of quantum systems show characteristics of both particles and waves (wave–particle duality), and there are limits to how accurately the value of a physical quantity can be predicted prior to its measurement, given a complete set of initial conditions (the uncertainty principle).
Quantum mechanics arose gradually from theories to explain observations that could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. These early attempts to understand microscopic phenomena, now known as the "old quantum theory", led to the full development of quantum mechanics in the mid-1920s by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, Paul Dirac and others. The modern theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical entity called the wave function provides information, in the form of probability amplitudes, about what measurements of a particle's energy, momentum, and other physical properties may yield.
Overview and fundamental concepts
Quantum mechanics allows the calculation of properties and behaviour of physical systems. It is typically applied to microscopic systems: molecules, atoms and sub-atomic particles. It has been demonstrated to hold for complex molecules with thousands of atoms, but its application to human beings raises philosophical problems, such as Wigner's friend, and its application to the universe as a whole remains speculative. Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. For example, the refinement of quantum mechanics for the interaction of light and matter, known as quantum electrodynamics (QED), has been shown to agree with experiment to within 1 part in 10¹² when predicting the magnetic properties of an electron.
A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
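The following sketch (not part of the article; it assumes NumPy, an arbitrary one-dimensional grid, and an invented wave function) shows the Born rule in the simplest discretized setting: the squared modulus of the normalized amplitude plays the role of a probability density for position.

```python
import numpy as np

# Discretize a 1-D position coordinate and define a (complex) wave function,
# here a Gaussian envelope multiplied by a plane-wave factor.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 1.5 * x)

# Normalize so that the total probability integrates to 1.
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Born rule: the probability density is the squared modulus of the amplitude.
density = np.abs(psi)**2
print("total probability:", np.sum(density) * dx)              # ~1.0
print("P(-1 < x < 1):", np.sum(density[(x > -1) & (x < 1)]) * dx)
```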
One consequence of the mathematical rules of quantum mechanics is a tradeoff in predictability between measurable quantities. The most famous form of this uncertainty principle says that no matter how a quantum particle is prepared or how carefully experiments upon it are arranged, it is impossible to have a precise prediction for a measurement of its position and also at the same time for a measurement of its momentum.
Another consequence of the mathematical rules of quantum mechanics is the phenomenon of quantum interference, which is often illustrated with the double-slit experiment. In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles rather than waves; the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. This behavior is known as wave–particle duality. In addition to light, electrons, atoms, and molecules are all found to exhibit the same dual behavior when fired towards a double slit.
Another non-classical phenomenon predicted by quantum mechanics is quantum tunnelling: a particle that goes up against a potential barrier can cross it, even if its kinetic energy is smaller than the maximum of the potential. In classical mechanics this particle would be trapped. Quantum tunnelling has several important consequences, enabling radioactive decay, nuclear fusion in stars, and applications such as scanning tunnelling microscopy, tunnel diode and tunnel field-effect transistor.
When quantum systems interact, the result can be the creation of quantum entanglement: their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Erwin Schrödinger called entanglement "...the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought". Quantum entanglement enables quantum computing and is part of quantum communication protocols, such as quantum key distribution and superdense coding. Contrary to popular misconception, entanglement does not allow sending signals faster than light, as demonstrated by the no-communication theorem.
Another possibility opened by entanglement is testing for "hidden variables", hypothetical properties more fundamental than the quantities addressed in quantum theory itself, knowledge of which would allow more exact predictions than quantum theory provides. A collection of results, most significantly Bell's theorem, have demonstrated that broad classes of such hidden-variable theories are in fact incompatible with quantum physics. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. Many Bell tests have been performed and they have shown results incompatible with the constraints imposed by local hidden variables.
It is not possible to present these concepts in more than a superficial way without introducing the mathematics involved; understanding quantum mechanics requires not only manipulating complex numbers, but also linear algebra, differential equations, group theory, and other more advanced subjects. Accordingly, this article will present a mathematical formulation of quantum mechanics and survey its application to some useful and oft-studied examples.
Mathematical formulation
In the mathematically rigorous formulation of quantum mechanics, the state of a quantum mechanical system is a vector $\psi$ belonging to a (separable) complex Hilbert space $\mathcal{H}$. This vector is postulated to be normalized under the Hilbert space inner product, that is, it obeys $\langle \psi, \psi \rangle = 1$, and it is well-defined up to a complex number of modulus 1 (the global phase), that is, $\psi$ and $e^{i\alpha}\psi$ represent the same physical system. In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of complex square-integrable functions $L^2$, while the Hilbert space for the spin of a single proton is simply the space $\mathbb{C}^2$ of two-dimensional complex vectors with the usual inner product.
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are Hermitian (more precisely, self-adjoint) linear operators acting on the Hilbert space. A quantum state can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue $\lambda$ is non-degenerate and the probability is given by $|\langle v_\lambda, \psi \rangle|^2$, where $v_\lambda$ is its associated (unit) eigenvector. More generally, the eigenvalue is degenerate and the probability is given by $\langle \psi, P_\lambda \psi \rangle$, where $P_\lambda$ is the projector onto its associated eigenspace. In the continuous case, these formulas give instead the probability density.
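A small numerical sketch of this measurement rule (illustrative only; the observable and the state are chosen arbitrarily for the example) diagonalizes a Hermitian matrix and applies the Born rule to each non-degenerate eigenvalue:

```python
import numpy as np

# A Hermitian observable on C^2 (here the Pauli-X matrix) and a normalized state.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)
psi = np.array([1.0, 1.0j], dtype=complex) / np.sqrt(2.0)

# Eigen-decomposition: the eigenvalues are the possible measurement outcomes.
eigvals, eigvecs = np.linalg.eigh(A)

for lam, v in zip(eigvals, eigvecs.T):
    # Born rule for a non-degenerate eigenvalue: p = |<v, psi>|^2.
    p = abs(np.vdot(v, psi))**2
    print(f"outcome {lam:+.0f}: probability {p:.3f}")

# The probabilities sum to one because the eigenvectors form an orthonormal basis.
```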
After the measurement, if result $\lambda$ was obtained, the quantum state is postulated to collapse to $v_\lambda$, in the non-degenerate case, or to $P_\lambda \psi / \sqrt{\langle \psi, P_\lambda \psi \rangle}$, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity (see Measurement in quantum mechanics).
Time evolution of a quantum state
The time evolution of a quantum state is described by the Schrödinger equation:
$i\hbar \frac{d}{dt} \psi(t) = H \psi(t).$
Here $H$ denotes the Hamiltonian, the observable corresponding to the total energy of the system, and $\hbar$ is the reduced Planck constant. The constant $i\hbar$ is introduced so that the Hamiltonian is reduced to the classical Hamiltonian in cases where the quantum system can be approximated by a classical system; the ability to make such an approximation in certain limits is called the correspondence principle.
The solution of this differential equation is given by
$\psi(t) = e^{-iHt/\hbar}\, \psi(0).$
The operator $U(t) = e^{-iHt/\hbar}$ is known as the time-evolution operator, and has the crucial property that it is unitary. This time evolution is deterministic in the sense that – given an initial quantum state $\psi(0)$ – it makes a definite prediction of what the quantum state $\psi(t)$ will be at any later time.
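A minimal numerical sketch of this evolution (illustration only; the two-level Hamiltonian is invented for the example and $\hbar$ is set to 1) builds $U(t)$ as a matrix exponential and checks that it is unitary:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # work in units where hbar = 1

# A simple two-level Hamiltonian (an energy splitting plus a coupling term).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]], dtype=complex)

def U(t):
    """Time-evolution operator U(t) = exp(-i H t / hbar)."""
    return expm(-1j * H * t / hbar)

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state
t = 2.0
psi_t = U(t) @ psi0

# U(t) is unitary, so the norm of the state is preserved.
print("U is unitary:", np.allclose(U(t).conj().T @ U(t), np.eye(2)))
print("norm preserved:", np.vdot(psi_t, psi_t).real)
```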
Some wave functions produce probability distributions that are independent of time, such as eigenstates of the Hamiltonian. Many systems that are treated dynamically in classical mechanics are described by such "static" wave functions. For example, a single electron in an unexcited atom is pictured classically as a particle moving in a circular trajectory around the atomic nucleus, whereas in quantum mechanics, it is described by a static wave function surrounding the nucleus. For example, the electron wave function for an unexcited hydrogen atom is a spherically symmetric function known as an s orbital (Fig. 1).
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the quantum harmonic oscillator, the particle in a box, the dihydrogen cation, and the hydrogen atom. Even the helium atom – which contains just two electrons – has defied all attempts at a fully analytic treatment, admitting no solution in closed form.
However, there are techniques for finding approximate solutions. One method, called perturbation theory, uses the analytic result for a simple quantum mechanical model to create a result for a related but more complicated model by (for example) the addition of a weak potential energy. Another approximation method applies to systems for which quantum mechanics produces only small deviations from classical behavior. These deviations can then be computed based on the classical motion.
Uncertainty principle
One consequence of the basic quantum formalism is the uncertainty principle. In its most familiar form, this states that no preparation of a quantum particle can imply simultaneously precise predictions both for a measurement of its position and for a measurement of its momentum. Both position and momentum are observables, meaning that they are represented by Hermitian operators. The position operator $\hat{X}$ and momentum operator $\hat{P}$ do not commute, but rather satisfy the canonical commutation relation:
$[\hat{X}, \hat{P}] = i\hbar.$
Given a quantum state, the Born rule lets us compute expectation values for both $X$ and $P$, and moreover for powers of them. Defining the uncertainty for an observable by a standard deviation, we have
$\sigma_X = \sqrt{\langle X^2 \rangle - \langle X \rangle^2},$
and likewise for the momentum:
$\sigma_P = \sqrt{\langle P^2 \rangle - \langle P \rangle^2}.$
The uncertainty principle states that
$\sigma_X \sigma_P \geq \frac{\hbar}{2}.$
Either standard deviation can in principle be made arbitrarily small, but not both simultaneously. This inequality generalizes to arbitrary pairs of self-adjoint operators $A$ and $B$. The commutator of these two operators is
$[A, B] = AB - BA,$
and this provides the lower bound on the product of standard deviations:
$\sigma_A \sigma_B \geq \tfrac{1}{2} \left| \langle [A, B] \rangle \right|.$
Another consequence of the canonical commutation relation is that the position and momentum operators are Fourier transforms of each other, so that a description of an object according to its momentum is the Fourier transform of its description according to its position. The fact that dependence in momentum is the Fourier transform of the dependence in position means that the momentum operator is equivalent (up to an $i/\hbar$ factor) to taking the derivative according to the position, since in Fourier analysis differentiation corresponds to multiplication in the dual space. This is why in quantum equations in position space, the momentum $p_i$ is replaced by $-i\hbar \frac{\partial}{\partial x_i}$, and in particular in the non-relativistic Schrödinger equation in position space the momentum-squared term is replaced with a Laplacian times $-\hbar^2$.
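The general bound above is easy to verify numerically. The sketch below (an illustration only, with $\hbar = 1$ and spin-1/2 operators standing in for $A$ and $B$, and an arbitrary state) computes both sides of the inequality:

```python
import numpy as np

hbar = 1.0
# Spin-1/2 operators (hbar/2 times the Pauli matrices).
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)

# An arbitrary normalized state.
psi = np.array([0.6, 0.8j], dtype=complex)

def expval(op):
    return np.vdot(psi, op @ psi).real

def sigma(op):
    return np.sqrt(expval(op @ op) - expval(op)**2)

commutator = Sx @ Sy - Sy @ Sx
lhs = sigma(Sx) * sigma(Sy)
rhs = 0.5 * abs(np.vdot(psi, commutator @ psi))
print(f"sigma_A * sigma_B = {lhs:.4f} >= {rhs:.4f}")  # Robertson inequality holds
```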
Composite systems and entanglement
When two different quantum systems are considered together, the Hilbert space of the combined system is the tensor product of the Hilbert spaces of the two components. For example, let $A$ and $B$ be two quantum systems, with Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. The Hilbert space of the composite system is then
$\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B.$
If the state for the first system is the vector $\psi_A$ and the state for the second system is $\psi_B$, then the state of the composite system is
$\psi_A \otimes \psi_B.$
Not all states in the joint Hilbert space can be written in this form, however, because the superposition principle implies that linear combinations of these "separable" or "product states" are also valid. For example, if $\psi_A$ and $\phi_A$ are both possible states for system $A$, and likewise $\psi_B$ and $\phi_B$ are both possible states for system $B$, then
$\tfrac{1}{\sqrt{2}} \left( \psi_A \otimes \psi_B + \phi_A \otimes \phi_B \right)$
is a valid joint state that is not separable. States that are not separable are called entangled.
If the state for a composite system is entangled, it is impossible to describe either component system $A$ or system $B$ by a state vector. One can instead define reduced density matrices that describe the statistics that can be obtained by making measurements on either component system alone. This necessarily causes a loss of information, though: knowing the reduced density matrices of the individual systems is not enough to reconstruct the state of the composite system. Just as density matrices specify the state of a subsystem of a larger system, analogously, positive operator-valued measures (POVMs) describe the effect on a subsystem of a measurement performed on a larger system. POVMs are extensively used in quantum information theory.
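The following sketch (illustrative only; it hard-codes a two-qubit Bell state) computes a reduced density matrix by a partial trace and shows that each subsystem of the entangled pair is left in a mixed state rather than a state vector:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on the two-qubit Hilbert space C^2 (x) C^2.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2.0)

# Density matrix of the joint (pure, entangled) state.
rho_AB = np.outer(bell, bell.conj())

# Reduced density matrix of subsystem A: trace out subsystem B.
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_A)   # 0.5 * identity: a maximally mixed state, not a state vector
```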
As described above, entanglement is a key feature of models of measurement processes in which an apparatus becomes entangled with the system being measured. Systems interacting with the environment in which they reside generally become entangled with that environment, a phenomenon known as quantum decoherence. This can explain why, in practice, quantum effects are difficult to observe in systems larger than microscopic.
Equivalence between formulations
There are many mathematically equivalent formulations of quantum mechanics. One of the oldest and most common is the "transformation theory" proposed by Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics – matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). An alternative formulation of quantum mechanics is Feynman's path integral formulation, in which a quantum-mechanical amplitude is considered as a sum over all possible classical and non-classical paths between the initial and final states. This is the quantum-mechanical counterpart of the action principle in classical mechanics.
Symmetries and conservation laws
The Hamiltonian $H$ is known as the generator of time evolution, since it defines a unitary time-evolution operator $U(t) = e^{-iHt/\hbar}$ for each value of $t$. From this relation between $U(t)$ and $H$, it follows that any observable $A$ that commutes with $H$ will be conserved: its expectation value will not change over time. This statement generalizes, as mathematically, any Hermitian operator $A$ can generate a family of unitary operators parameterized by a variable $t$. Under the evolution generated by $A$, any observable $B$ that commutes with $A$ will be conserved. Moreover, if $B$ is conserved by evolution under $A$, then $A$ is conserved under the evolution generated by $B$. This implies a quantum version of the result proven by Emmy Noether in classical (Lagrangian) mechanics: for every differentiable symmetry of a Hamiltonian, there exists a corresponding conservation law.
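A numerical sketch of this statement (illustration only; the matrices are invented and $\hbar = 1$) evolves a state under a Hamiltonian and compares the expectation value of an observable that commutes with it against one that does not:

```python
import numpy as np
from scipy.linalg import expm

# A Hamiltonian and an observable that commutes with it (both diagonal here),
# plus one that does not. Units with hbar = 1; matrices chosen for illustration.
H = np.diag([0.0, 1.0, 3.0]).astype(complex)
A = np.diag([2.0, 2.0, -1.0]).astype(complex)                   # [A, H] = 0
B = np.zeros((3, 3), dtype=complex); B[0, 1] = B[1, 0] = 1.0    # [B, H] != 0

psi0 = np.array([1.0, 1.0, 1.0], dtype=complex) / np.sqrt(3.0)

for t in (0.0, 0.7, 1.4):
    psi_t = expm(-1j * H * t) @ psi0
    exp_A = np.vdot(psi_t, A @ psi_t).real
    exp_B = np.vdot(psi_t, B @ psi_t).real
    print(f"t={t:.1f}  <A>={exp_A:.3f}  <B>={exp_B:.3f}")
# <A> stays constant because A commutes with H; <B> oscillates in time.
```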
Examples
Free particle
The simplest example of a quantum system with a position degree of freedom is a free particle in a single spatial dimension. A free particle is one which is not subject to external influences, so that its Hamiltonian consists only of its kinetic energy:
$H = \frac{1}{2m} P^2 = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2}.$
The general solution of the Schrödinger equation is given by
$\psi(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat\psi(k,0)\, e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}\, \mathrm{d}k,$
which is a superposition of all possible plane waves $e^{i\left(kx - \frac{\hbar k^2}{2m}t\right)}$, which are eigenstates of the momentum operator with momentum $p = \hbar k$. The coefficients of the superposition are $\hat\psi(k,0)$, which is the Fourier transform of the initial quantum state $\psi(x,0)$.
It is not possible for the solution to be a single momentum eigenstate, or a single position eigenstate, as these are not normalizable quantum states. Instead, we can consider a Gaussian wave packet:
$\psi(x,0) = \frac{1}{\sqrt[4]{\pi a}}\, e^{-\frac{x^2}{2a}},$
which has Fourier transform, and therefore momentum distribution
$\hat\psi(k,0) = \sqrt[4]{\frac{a}{\pi}}\, e^{-\frac{a k^2}{2}}.$
We see that as we make $a$ smaller the spread in position gets smaller, but the spread in momentum gets larger. Conversely, by making $a$ larger we make the spread in momentum smaller, but the spread in position gets larger. This illustrates the uncertainty principle.
As we let the Gaussian wave packet evolve in time, we see that its center moves through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more and more uncertain. The uncertainty in momentum, however, stays constant.
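This spreading can be seen directly in a numerical sketch (illustration only; units with $\hbar = m = 1$ and arbitrary grid parameters) that evolves the packet in momentum space, where free evolution is just a phase factor:

```python
import numpy as np

hbar = m = 1.0
x = np.linspace(-60.0, 60.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

# Initial Gaussian packet, normalized on the grid.
psi0 = np.exp(-x**2 / 2.0)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def spread(psi):
    p = np.abs(psi)**2 * dx
    mean = np.sum(p * x)
    return np.sqrt(np.sum(p * (x - mean)**2))

for t in (0.0, 2.0, 5.0):
    # Free evolution is diagonal in momentum space: multiply by exp(-i hbar k^2 t / 2m).
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))
    print(f"t={t:.0f}  sigma_x={spread(psi_t):.3f}")
# The position spread grows with time while the momentum distribution is unchanged.
```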
Particle in a box
The particle in a one-dimensional potential energy box is the most mathematically simple example where restraints lead to the quantization of energy levels. The box is defined as having zero potential energy everywhere inside a certain region, and therefore infinite potential energy everywhere outside that region. For the one-dimensional case in the $x$ direction, the time-independent Schrödinger equation may be written
$-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} = E\psi.$
With the differential operator defined by
$\hat{p}_x = -i\hbar \frac{d}{dx},$
the previous equation is evocative of the classic kinetic energy analogue,
$\frac{1}{2m} \hat{p}_x^2 = E,$
with state $\psi$ in this case having energy $E$ coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
$\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},$
or, from Euler's formula,
$\psi(x) = C \sin(kx) + D \cos(kx).$
The infinite potential walls of the box determine the values of $C$, $D$, and $k$ at $x = 0$ and $x = L$, where $\psi$ must be zero. Thus, at $x = 0$,
$\psi(0) = 0 = C \sin(0) + D \cos(0) = D,$
and $D = 0$. At $x = L$,
$\psi(L) = 0 = C \sin(kL),$
in which $C$ cannot be zero as this would conflict with the postulate that $\psi$ has norm 1. Therefore, since $\sin(kL) = 0$, $kL$ must be an integer multiple of $\pi$,
$k = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots.$
This constraint on $k$ implies a constraint on the energy levels, yielding
$E_n = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$
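As a quick numerical illustration of these discrete levels (a sketch only; the constants are approximate SI values and the 1 nm box width is chosen arbitrarily), the first few energies of an electron in a one-dimensional box follow directly from the formula above:

```python
import numpy as np

# Physical constants (SI, approximate) and an electron confined to a 1 nm box.
hbar = 1.054_571_8e-34    # J s
m_e = 9.109_383_7e-31     # kg
L = 1.0e-9                # m
eV = 1.602_176_6e-19      # J

# E_n = n^2 * pi^2 * hbar^2 / (2 m L^2): discrete, not continuous, energies.
n = np.arange(1, 5)
E_n = n**2 * np.pi**2 * hbar**2 / (2 * m_e * L**2)
for ni, Ei in zip(n, E_n):
    print(f"n={ni}: E = {Ei / eV:.3f} eV")
```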
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
Harmonic oscillator
As in the classical case, the potential for the quantum harmonic oscillator is given by
$V(x) = \frac{1}{2} m \omega^2 x^2.$
This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by
$\psi_n(x) = \frac{1}{\sqrt{2^n\, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left( \sqrt{\frac{m\omega}{\hbar}}\, x \right),$
where $H_n$ are the Hermite polynomials
$H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} \left( e^{-x^2} \right),$
and the corresponding energy levels are
$E_n = \hbar\omega \left( n + \tfrac{1}{2} \right).$
This is another example illustrating the discretization of energy for bound states.
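The discrete spectrum can also be recovered numerically. The sketch below (illustrative only; it uses units in which $\hbar = m = \omega = 1$ and an arbitrary finite-difference grid) diagonalizes a discretized harmonic-oscillator Hamiltonian and reproduces the evenly spaced levels $E_n = n + \tfrac{1}{2}$ to good accuracy:

```python
import numpy as np

# Units with hbar = m = omega = 1; the exact levels are E_n = n + 1/2.
N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# Finite-difference Hamiltonian: kinetic term -(1/2) d^2/dx^2 plus V(x) = x^2 / 2.
diag = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(np.round(E[:5], 4))   # approximately [0.5, 1.5, 2.5, 3.5, 4.5]
```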
Mach–Zehnder interferometer
The Mach–Zehnder interferometer (MZI) illustrates the concepts of superposition and interference with linear algebra in dimension 2, rather than differential equations. It can be seen as a simplified version of the double-slit experiment, but it is of interest in its own right, for example in the delayed choice quantum eraser, the Elitzur–Vaidman bomb tester, and in studies of quantum entanglement.
We can model a photon going through the interferometer by considering that at each point it can be in a superposition of only two paths: the "lower" path which starts from the left, goes straight through both beam splitters, and ends at the top, and the "upper" path which starts from the bottom, goes straight through both beam splitters, and ends at the right. The quantum state of the photon is therefore a vector $\psi \in \mathbb{C}^2$ that is a superposition of the "lower" path $\psi_l = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and the "upper" path $\psi_u = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, that is, $\psi = \alpha \psi_l + \beta \psi_u$ for complex $\alpha, \beta$. In order to respect the postulate that $\langle \psi, \psi \rangle = 1$ we require that $|\alpha|^2 + |\beta|^2 = 1$.
Both beam splitters are modelled as the unitary matrix $B = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}$, which means that when a photon meets the beam splitter it will either stay on the same path with a probability amplitude of $1/\sqrt{2}$, or be reflected to the other path with a probability amplitude of $i/\sqrt{2}$. The phase shifter on the upper arm is modelled as the unitary matrix $P = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\Delta\Phi} \end{pmatrix}$, which means that if the photon is on the "upper" path it will gain a relative phase of $\Delta\Phi$, and it will stay unchanged if it is in the lower path.
A photon that enters the interferometer from the left will then be acted upon with a beam splitter $B$, a phase shifter $P$, and another beam splitter $B$, and so end up in the state
$BPB\psi_l = i e^{i\Delta\Phi/2} \begin{pmatrix} -\sin(\Delta\Phi/2) \\ \cos(\Delta\Phi/2) \end{pmatrix},$
and the probabilities that it will be detected at the right or at the top are given respectively by
$p(u) = |\langle \psi_u, BPB\psi_l \rangle|^2 = \cos^2\!\frac{\Delta\Phi}{2}, \qquad p(l) = |\langle \psi_l, BPB\psi_l \rangle|^2 = \sin^2\!\frac{\Delta\Phi}{2}.$
One can therefore use the Mach–Zehnder interferometer to estimate the phase shift by estimating these probabilities.
It is interesting to consider what would happen if the photon were definitely in either the "lower" or "upper" paths between the beam splitters. This can be accomplished by blocking one of the paths, or equivalently by removing the first beam splitter (and feeding the photon from the left or the bottom, as desired). In both cases, there will be no interference between the paths anymore, and the probabilities are given by $p(u) = p(l) = 1/2$, independently of the phase $\Delta\Phi$. From this we can conclude that the photon does not take one path or another after the first beam splitter, but rather that it is in a genuine quantum superposition of the two paths.
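A short numerical sketch (illustration only; it simply encodes the matrices given above) reproduces both the interference pattern and its disappearance when the first beam splitter is removed:

```python
import numpy as np

# Beam splitter and phase shifter as defined in the text.
B = np.array([[1, 1j],
              [1j, 1]], dtype=complex) / np.sqrt(2.0)

def P(phi):
    return np.array([[1, 0],
                     [0, np.exp(1j * phi)]], dtype=complex)

lower = np.array([1.0, 0.0], dtype=complex)   # photon entering from the left

for phi in (0.0, np.pi / 2, np.pi):
    out = B @ P(phi) @ B @ lower
    p_top, p_right = abs(out[0])**2, abs(out[1])**2
    # Interference: p_top = sin^2(phi/2), p_right = cos^2(phi/2).
    print(f"phi={phi:.2f}  p_top={p_top:.3f}  p_right={p_right:.3f}")

# Removing the first beam splitter destroys the interference:
out_no_bs = B @ P(np.pi / 2) @ lower
print("without first beam splitter:", np.round(np.abs(out_no_bs)**2, 3))  # [0.5, 0.5]
```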
Applications
Quantum mechanics has had enormous success in explaining many of the features of our universe, with regard to small-scale and discrete quantities and interactions which cannot be explained by classical methods. Quantum mechanics is often the only theory that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Solid-state physics and materials science are dependent upon quantum mechanics.
In many aspects, modern technology operates at a scale where quantum effects are significant. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the optical amplifier and the laser, the transistor and semiconductors such as the microprocessor, medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA.
Relation to other scientific theories
Classical mechanics
The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.
When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
Complications arise with chaotic systems, which do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.
Quantum decoherence is a mechanism through which quantum systems lose coherence, and thus become incapable of displaying many typically quantum effects: quantum superpositions become simply probabilistic mixtures, and quantum entanglement becomes simply classical correlations. Quantum coherence is not typically evident at macroscopic scales, though at temperatures approaching absolute zero quantum behavior may manifest macroscopically.
Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.
Special relativity and electrodynamics
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.
The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been used since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical Coulomb potential. Likewise, in a Stern–Gerlach experiment, a charged particle is modeled as a quantum system, while the background magnetic field is described classically. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg.
Relation to general relativity
Even though the predictions of both quantum theory and general relativity have been supported by rigorous and repeated empirical evidence, their abstract formalisms contradict each other and they have proven extremely difficult to incorporate into one consistent, cohesive model. Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th- and 21st-century physics. This TOE would combine not only the models of subatomic physics but also derive the four fundamental forces of nature from a single force or phenomenon.
One proposal for doing so is string theory, which posits that the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force.
Another popular theory is loop quantum gravity (LQG), which describes quantum properties of gravity and is thus a theory of quantum spacetime. LQG is an attempt to merge and adapt standard quantum mechanics and standard general relativity. This theory describes space as an extremely fine fabric "woven" of finite loops called spin networks. The evolution of a spin network over time is called a spin foam. The characteristic length scale of a spin foam is the Planck length, approximately 1.616×10⁻³⁵ m, and so lengths shorter than the Planck length are not physically meaningful in LQG.
Philosophical implications
Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. The arguments centre on the probabilistic nature of quantum mechanics, the difficulties with wavefunction collapse and the related measurement problem, and quantum nonlocality. Perhaps the only consensus that exists about these issues is that there is no consensus. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics." According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."
The views of Niels Bohr, Werner Heisenberg and other physicists are often grouped together as the "Copenhagen interpretation". According to these views, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but is instead a final renunciation of the classical idea of "causality". Bohr in particular emphasized that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the complementary nature of evidence obtained under different experimental situations. Copenhagen-type interpretations were adopted by Nobel laureates in quantum physics, including Bohr, Heisenberg, Schrödinger, Feynman, and Zeilinger as well as 21st-century researchers in quantum foundations.
Albert Einstein, himself one of the founders of quantum theory, was troubled by its apparent failure to respect some cherished metaphysical principles, such as determinism and locality. Einstein's long-running exchanges with Bohr about the meaning and status of quantum mechanics are now known as the Bohr–Einstein debates. Einstein believed that underlying quantum mechanics must be a theory that explicitly forbids action at a distance. He argued that quantum mechanics was incomplete, a theory that was valid but not fundamental, analogous to how thermodynamics is valid, but the fundamental theory behind it is statistical mechanics. In 1935, Einstein and his collaborators Boris Podolsky and Nathan Rosen published an argument that the principle of locality implies the incompleteness of quantum mechanics, a thought experiment later termed the Einstein–Podolsky–Rosen paradox. In 1964, John Bell showed that EPR's principle of locality, together with determinism, was actually incompatible with quantum mechanics: they implied constraints on the correlations produced by distant systems, now known as Bell inequalities, that can be violated by entangled particles. Since then several experiments have been performed to obtain these correlations, with the result that they do in fact violate Bell inequalities, and thus falsify the conjunction of locality with determinism.
Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function. This solves the measurement problem.
Everett's many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes. This is a consequence of removing the axiom of the collapse of the wave packet. All possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Several attempts have been made to make sense of this and derive the Born rule, with no consensus on whether they have been successful.
Relational quantum mechanics appeared in the late 1990s as a modern derivative of Copenhagen-type ideas, and QBism was developed some years later.
History
Quantum mechanics was developed in the early decades of the 20th century, driven by the need to explain phenomena that, in some cases, had been observed in earlier times. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803 English polymath Thomas Young described the famous double-slit experiment. This experiment played a major role in the general acceptance of the wave theory of light.
During the early 19th century, chemical research by John Dalton and Amedeo Avogadro lent weight to the atomic theory of matter, an idea that James Clerk Maxwell, Ludwig Boltzmann and others built upon to establish the kinetic theory of gases. The successes of kinetic theory gave further credence to the idea that matter is composed of atoms, yet the theory also had shortcomings that would only be resolved by the development of quantum mechanics. While the early conception of atoms from Greek philosophy had been that they were indivisible units (the word "atom" deriving from the Greek for "uncuttable"), the 19th century saw the formulation of hypotheses about subatomic structure. One important discovery in that regard was Michael Faraday's 1838 observation of a glow caused by an electrical discharge inside a glass tube containing gas at low pressure. Julius Plücker, Johann Wilhelm Hittorf and Eugen Goldstein carried on and improved upon Faraday's work, leading to the identification of cathode rays, which J. J. Thomson found to consist of subatomic particles that would be called electrons.
The black-body radiation problem was discovered by Gustav Kirchhoff in 1859. In 1900, Max Planck proposed the hypothesis that energy is radiated and absorbed in discrete "quanta" (or energy packets), yielding a calculation that precisely matched the observed patterns of black-body radiation. The word quantum derives from the Latin, meaning "how great" or "how much". According to Planck, quantities of energy could be thought of as divided into "elements" whose size (E) would be proportional to their frequency (ν):
$E = h\nu,$
where h is the Planck constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and was not the physical reality of the radiation. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material. Niels Bohr then developed Planck's ideas about radiation into a model of the hydrogen atom that successfully predicted the spectral lines of hydrogen. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle (later called the photon), with a discrete amount of energy that depends on its frequency. In his paper "On the Quantum Theory of Radiation", Einstein expanded on the interaction between energy and matter to explain the absorption and emission of energy by atoms. Although overshadowed at the time by his general theory of relativity, this paper articulated the mechanism underlying the stimulated emission of radiation, which became the basis of the laser.
This phase is known as the old quantum theory. Never complete or self-consistent, the old quantum theory was rather a set of heuristic corrections to classical mechanics. The theory is now understood as a semi-classical approximation to modern quantum mechanics. Notable results from this period include, in addition to the work of Planck, Einstein and Bohr mentioned above, Einstein and Peter Debye's work on the specific heat of solids, Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, and Arnold Sommerfeld's extension of the Bohr model to include special-relativistic effects.
In the mid-1920s quantum mechanics was developed to become the standard formulation for atomic physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa. Building on de Broglie's approach, modern quantum mechanics was born in 1925, when the German physicists Werner Heisenberg, Max Born, and Pascual Jordan developed matrix mechanics and the Austrian physicist Erwin Schrödinger invented wave mechanics. Born introduced the probabilistic interpretation of Schrödinger's wave function in July 1926. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
By 1930, quantum mechanics had been further unified and formalized by David Hilbert, Paul Dirac and John von Neumann with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the 'observer'. It has since permeated many disciplines, including quantum chemistry, quantum electronics, quantum optics, and quantum information science. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies. While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors and superfluids.
See also
Bra–ket notation
Einstein's thought experiments
List of textbooks on classical and quantum mechanics
Macroscopic quantum phenomena
Phase-space formulation
Regularization (physics)
Two-state quantum system
Explanatory notes
References
Further reading
The following titles, all by working physicists, attempt to communicate quantum theory to lay people, using a minimum of technical apparatus.
Chester, Marvin (1987). Primer of Quantum Mechanics. John Wiley.
Richard Feynman, 1985. QED: The Strange Theory of Light and Matter, Princeton University Press. . Four elementary lectures on quantum electrodynamics and quantum field theory, yet containing many insights for the expert.
Ghirardi, GianCarlo, 2004. Sneaking a Look at God's Cards, Gerald Malsbary, trans. Princeton Univ. Press. The most technical of the works cited here. Passages using algebra, trigonometry, and bra–ket notation can be passed over on a first reading.
N. David Mermin, 1990, "Spooky actions at a distance: mysteries of the QT" in his Boojums All the Way Through. Cambridge University Press: 110–76.
Victor Stenger, 2000. Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Buffalo, NY: Prometheus Books. Chpts. 5–8. Includes cosmological and philosophical considerations.
More technical:
Bryce DeWitt, R. Neill Graham, eds., 1973. The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press.
D. Greenberger, K. Hentschel, F. Weinert, eds., 2009. Compendium of quantum physics, Concepts, experiments, history and philosophy, Springer-Verlag, Berlin, Heidelberg. Short articles on many QM topics.
A standard undergraduate text.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.
Hagen Kleinert, 2004. Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. Singapore: World Scientific. Draft of 4th edition.
Online copy
Gunther Ludwig, 1968. Wave Mechanics. London: Pergamon Press.
George Mackey (2004). The mathematical foundations of quantum mechanics. Dover Publications. .
Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G.M. Temmer. North Holland, John Wiley & Sons. Cf. chpt. IV, section III. online
Scerri, Eric R., 2006. The Periodic Table: Its Story and Its Significance. Oxford University Press. Considers the extent to which chemistry and the periodic system have been reduced to quantum mechanics.
Veltman, Martinus J.G. (2003), Facts and Mysteries in Elementary Particle Physics.
On Wikibooks
This Quantum World
External links
J. O'Connor and E. F. Robertson: A history of quantum mechanics.
Introduction to Quantum Theory at Quantiki.
Quantum Physics Made Relatively Simple: three video lectures by Hans Bethe.
Course material
Quantum Cook Book and PHYS 201: Fundamentals of Physics II by Ramamurti Shankar, Yale OpenCourseware.
Modern Physics: With waves, thermodynamics, and optics – an online textbook.
MIT OpenCourseWare: Chemistry and Physics. See 8.04, 8.05 and 8.06.
Examples in Quantum Mechanics.
Imperial College Quantum Mechanics Course.
Philosophy | 0.772672 | 0.99985 | 0.772557 |
Data modeling | Data modeling in software engineering is the process of creating a data model for an information system by applying certain formal techniques. It may be applied as part of broader Model-driven engineering (MDE) concept.
Overview
Data modeling is a process used to define and analyze data requirements needed to support the business processes within the scope of corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system.
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.
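As an illustration of the three levels (a sketch only; the Customer/Order entities, attribute names, and SQL dialect are invented for the example), the same data requirement can be expressed as a conceptual model, a logical table structure, and one possible physical DDL realization:

```python
from dataclasses import dataclass

# Conceptual level: technology-independent entities and a relationship
# ("a customer places orders"); all names here are hypothetical.
@dataclass
class Customer:
    name: str
    email: str

@dataclass
class Order:
    customer: Customer          # relationship: Order -> Customer
    total_amount: float

# Logical level: the same requirements expressed as table structures
# (columns, keys), still independent of a specific storage engine.
logical_schema = {
    "customer": {"columns": ["customer_id", "name", "email"],
                 "primary_key": "customer_id"},
    "order": {"columns": ["order_id", "customer_id", "total_amount"],
              "primary_key": "order_id",
              "foreign_keys": {"customer_id": "customer.customer_id"}},
}

# Physical level: one possible realization as DDL for a concrete database,
# where storage details (types, indexes, tablespaces) would be added.
physical_ddl = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT
);
CREATE TABLE "order" (
    order_id     INTEGER PRIMARY KEY,
    customer_id  INTEGER REFERENCES customer(customer_id),
    total_amount NUMERIC(10, 2)
);
"""
print(physical_ddl)
```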
Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization, e.g., using data modeling:
to assist business analysts, programmers, testers, manual writers, IT package selectors, engineers, managers, related organizations and clients to understand and use an agreed upon semi-formal model that encompasses the concepts of the organization and how they relate to one another
to manage data as a resource
to integrate information systems
to design databases/data warehouses (aka data repositories)
Data modeling may be performed during various types of projects and in multiple phases of projects. Data models are progressive; there is no such thing as the final data model for a business or application. Instead a data model should be considered a living document that will change in response to a changing business. The data models should ideally be stored in a repository so that they can be retrieved, expanded, and edited over time. Whitten et al. (2004) determined two types of data modeling:
Strategic data modeling: This is part of the creation of an information systems strategy, which defines an overall vision and architecture for information systems. Information technology engineering is a methodology that embraces this approach.
Data modeling during systems analysis: In systems analysis logical data models are created as part of the development of new databases.
Data modeling is also used as a technique for detailing business requirements for specific databases. It is sometimes called database modeling because a data model is eventually implemented in a database.
Topics
Data models
Data models provide a framework for data to be used within information systems by providing specific definitions and formats. If a data model is used consistently across systems then compatibility of data can be achieved. If the same data structures are used to store and access data then different applications can share data seamlessly. The results of this are indicated in the diagram. However, systems and interfaces are often expensive to build, operate, and maintain. They may also constrain the business rather than support it. This may occur when the quality of the data models implemented in systems and interfaces is poor.
Some common problems found in data models are:
Business rules, specific to how things are done in a particular place, are often fixed in the structure of a data model. This means that small changes in the way business is conducted lead to large changes in computer systems and interfaces. So, business rules need to be implemented in a flexible way that does not result in complicated dependencies, rather the data model should be flexible enough so that changes in the business can be implemented within the data model in a relatively quick and efficient way.
Entity types are often not identified, or are identified incorrectly. This can lead to replication of data, data structure and functionality, together with the attendant costs of that duplication in development and maintenance. Therefore, data definitions should be made as explicit and easy to understand as possible to minimize misinterpretation and duplication.
Data models for different systems are arbitrarily different. The result of this is that complex interfaces are required between systems that share data. These interfaces can account for between 25 and 70% of the cost of current systems. Required interfaces should be considered inherently while designing a data model, as a data model on its own would not be usable without interfaces within different systems.
Data cannot be shared electronically with customers and suppliers, because the structure and meaning of data have not been standardised. To obtain optimal value from an implemented data model, it is very important to define standards that will ensure that data models will both meet business needs and be consistent.
Conceptual, logical and physical schemas
In 1975 ANSI described three kinds of data-model instance:
Conceptual schema: describes the semantics of a domain (the scope of the model). For example, it may be a model of the interest area of an organization or of an industry. This consists of entity classes, representing kinds of things of significance in the domain, and relationships assertions about associations between pairs of entity classes. A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial "language" with a scope that is limited by the scope of the model. Simply described, a conceptual schema is the first step in organizing the data requirements.
Logical schema: describes the structure of some domain of information. This consists of descriptions of (for example) tables, columns, object-oriented classes, and XML tags. The logical schema and conceptual schema are sometimes implemented as one and the same.
Physical schema: describes the physical means used to store data. This is concerned with partitions, CPUs, tablespaces, and the like.
According to ANSI, this approach allows the three perspectives to be relatively independent of each other. Storage technology can change without affecting either the logical or the conceptual schema. The table/column structure can change without (necessarily) affecting the conceptual schema. In each case, of course, the structures must remain consistent across all schemas of the same data model.
Data modeling process
In the context of business process integration (see figure), data modeling complements business process modeling, and ultimately results in database generation.
The process of designing a database involves producing the previously described three types of schemas – conceptual, logical, and physical. The database design documented in these schemas is converted through a Data Definition Language, which can then be used to generate a database. A fully attributed data model contains detailed attributes (descriptions) for every entity within it. The term "database design" can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term "database design" could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the Database Management System or DBMS.
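A minimal sketch of that conversion step (illustrative only; the helper function, table name, and column types are assumptions, and real DDL generation would also handle constraints, indexes, and vendor dialects):

```python
# Turn a logical table description into Data Definition Language (DDL).
# Column types are assumed to be supplied along with the logical model.
def to_ddl(table_name, columns, primary_key):
    col_lines = [f"    {name} {sql_type}" for name, sql_type in columns.items()]
    col_lines.append(f"    PRIMARY KEY ({primary_key})")
    return f"CREATE TABLE {table_name} (\n" + ",\n".join(col_lines) + "\n);"

print(to_ddl("customer",
             {"customer_id": "INTEGER", "name": "TEXT", "email": "TEXT"},
             primary_key="customer_id"))
```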
In the process, system interfaces account for 25% to 70% of the development and support costs of current systems. The primary reason for this cost is that these systems do not share a common data model. If data models are developed on a system-by-system basis, then not only is the same analysis repeated in overlapping areas, but further analysis must be performed to create the interfaces between them. Most systems within an organization contain the same basic data, redeveloped for a specific purpose. Therefore, an efficiently designed basic data model can minimize rework with minimal modifications for the purposes of different systems within the organization.
Modeling methodologies
Data models represent information areas of interest. While there are many ways to create data models, according to Len Silverston (1997) only two modeling methodologies stand out, top-down and bottom-up:
Bottom-up models or View Integration models are often the result of a reengineering effort. They usually start with existing data structures forms, fields on application screens, or reports. These models are usually physical, application-specific, and incomplete from an enterprise perspective. They may not promote data sharing, especially if they are built without reference to other parts of the organization.
Top-down logical data models, on the other hand, are created in an abstract way by getting information from people who know the subject area. A system may not implement all the entities in a logical model, but the model serves as a reference point or template.
Sometimes models are created in a mixture of the two methods: by considering the data needs and structure of an application and by consistently referencing a subject-area model. In many environments the distinction between a logical data model and a physical data model is blurred. In addition, some CASE tools don't make a distinction between logical and physical data models.
Entity–relationship diagrams
There are several notations for data modeling. The actual model is frequently called "entity–relationship model", because it depicts data in terms of the entities and relationships described in the data. An entity–relationship model (ERM) is an abstract conceptual representation of structured data. Entity–relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.
These models are used in the first stage of information system design, during requirements analysis, to describe the information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classification of the terms used and their relationships) for a certain universe of discourse, i.e. an area of interest.
Several techniques have been developed for the design of data models. While these methodologies guide data modelers in their work, two different people using the same methodology will often come up with very different results. Most notable are:
Bachman diagrams
Barker's notation
Chen's notation
Data Vault Modeling
Extended Backus–Naur form
IDEF1X
Object-relational mapping
Object-Role Modeling and Fully Communication Oriented Information Modeling
Relational Model
Relational Model/Tasmania
Generic data modeling
Generic data models are generalizations of conventional data models. They define standardized general relation types, together with the kinds of things that may be related by such a relation type.
The definition of generic data model is similar to the definition of a natural language. For example, a generic data model may define relation types such as a 'classification relation', being a binary relation between an individual thing and a kind of thing (a class), and a 'part-whole relation', being a binary relation between two things, one with the role of part, the other with the role of whole, regardless of the kind of things that are related.
Given an extensible list of classes, this allows the classification of any individual thing and the specification of part-whole relations for any individual object. By standardization of an extensible list of relation types, a generic data model enables the expression of an unlimited number of kinds of facts and will approach the capabilities of natural languages. Conventional data models, on the other hand, have a fixed and limited domain scope, because the instantiation (usage) of such a model only allows expressions of kinds of facts that are predefined in the model.
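As a rough sketch of the idea, the example below represents the two generic relation types mentioned above ('classification' and 'part-whole') as plain data in Python; all entity names and identifiers are invented for illustration, and a real generic data model would be considerably richer.

```python
# Minimal sketch of a generic data model: facts are stored as instances of a
# small, extensible set of standardized relation types rather than as columns
# of purpose-built tables. All names below are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    relation_type: str   # e.g. "classification" or "part-whole"
    left: str            # individual thing (or the part)
    right: str           # kind of thing (or the whole)

facts = [
    Relation("classification", "engine-0042", "diesel engine"),  # thing is-a class
    Relation("part-whole", "engine-0042", "truck-0007"),         # part of a whole
    Relation("classification", "truck-0007", "delivery truck"),
]

# New kinds of facts only require new class names or relation types,
# not a change to the underlying structure.
parts_of_truck = [f.left for f in facts
                  if f.relation_type == "part-whole" and f.right == "truck-0007"]
print(parts_of_truck)   # ['engine-0042']
```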
Semantic data modeling
The logical data structure of a DBMS, whether hierarchical, network, or relational, cannot totally satisfy the requirements for a conceptual definition of data because it is limited in scope and biased toward the implementation strategy employed by the DBMS. That is, unless the semantic data model is deliberately implemented in the database, a choice which may slightly impact performance but generally vastly improves productivity.
Therefore, the need to define data from a conceptual view has led to the development of semantic data modeling techniques. That is, techniques to define the meaning of data within the context of its interrelationships with other data. As illustrated in the figure the real world, in terms of resources, ideas, events, etc., is symbolically defined by its description within physical data stores. A semantic data model is an abstraction which defines how the stored symbols relate to the real world. Thus, the model must be a true representation of the real world.
The purpose of semantic data modeling is to create a structural model of a piece of the real world, called "universe of discourse". For this, three fundamental structural relations are considered:
Classification/instantiation: Objects with some structural similarity are described as instances of classes
Aggregation/decomposition: Composite objects are obtained by joining their parts
Generalization/specialization: Distinct classes with some common properties are combined into a more generic class with the common attributes
A semantic data model can be used to serve many purposes, such as:
Planning of data resources
Building of shareable databases
Evaluation of vendor software
Integration of existing databases
The overall goal of semantic data models is to capture more meaning of data by integrating relational concepts with more powerful abstraction concepts known from the artificial intelligence field. The idea is to provide high level modeling primitives as integral part of a data model in order to facilitate the representation of real world situations.
See also
Architectural pattern
Comparison of data modeling tools
Data (computer science)
Data dictionary
Document modeling
Enterprise data modelling
Entity Data Model
Information management
Information model
Building information modeling
Metadata modeling
Three-schema approach
Zachman Framework
References
Further reading
J.H. ter Bekke (1991). Semantic Data Modeling in Relational Environments
John Vincent Carlis, Joseph D. Maguire (2001). Mastering Data Modeling: A User-driven Approach.
Alan Chmura, J. Mark Heumann (2005). Logical Data Modeling: What it is and how to Do it.
Martin E. Modell (1992). Data Analysis, Data Modeling, and Classification.
M. Papazoglou, Stefano Spaccapietra, Zahir Tari (2000). Advances in Object-oriented Data Modeling.
G. Lawrence Sanders (1995). Data Modeling
Graeme C. Simsion, Graham C. Witt (2005). Data Modeling Essentials.
Matthew West (2011). Developing High Quality Data Models.
External links
Agile/Evolutionary Data Modeling
Data modeling articles
Database Modelling in UML
Data Modeling 101
Semantic data modeling
System Development, Methodologies and Modeling Notes by Tony Drewry
Request For Proposal - Information Management Metamodel (IMM) of the Object Management Group
Data Modeling is NOT just for DBMS's Part 1 Chris Bradley
Data Modeling is NOT just for DBMS's Part 2 Chris Bradley | 0.776504 | 0.99488 | 0.772529 |
Gradualism | Gradualism, from the Latin ("step"), is a hypothesis, a theory or a tenet assuming that change comes about gradually or that variation is gradual in nature and happens over time as opposed to in large steps. Uniformitarianism, incrementalism, and reformism are similar concepts.
Gradualism can also refer to desired, controlled change in society, institutions, or policies. For example, social democrats and democratic socialists see the socialist society as achieved through gradualism.
Geology and biology
In the natural sciences, gradualism is the theory which holds that profound change is the cumulative product of slow but continuous processes, often contrasted with catastrophism. The theory was proposed in 1795 by James Hutton, a Scottish geologist, and was later incorporated into Charles Lyell's theory of uniformitarianism. Tenets from both theories were applied to biology and formed the basis of early evolutionary theory.
Charles Darwin was influenced by Lyell's Principles of Geology, which explained both uniformitarian methodology and theory. Using uniformitarianism, which states that one cannot make an appeal to any force or phenomenon which cannot presently be observed (see catastrophism), Darwin theorized that the evolutionary process must occur gradually, not in saltations, since saltations are not presently observed, and extreme deviations from the usual phenotypic variation would be more likely to be selected against.
Gradualism is often confused with the concept of phyletic gradualism, a term coined by Stephen Jay Gould and Niles Eldredge to contrast with their model of punctuated equilibrium, which is gradualist itself but argues that most evolution is marked by long periods of evolutionary stability (called stasis), punctuated by rare instances of branching evolution.
Phyletic gradualism is a model of evolution which theorizes that most speciation is slow, uniform and gradual. When evolution occurs in this mode, it is usually by the steady transformation of a whole species into a new one (through a process called anagenesis). In this view no clear line of demarcation exists between an ancestral species and a descendant species, unless splitting occurs.
Punctuated gradualism is a microevolutionary hypothesis that refers to a species that has "relative stasis over a considerable part of its total duration [and] underwent periodic, relatively rapid, morphologic change that did not lead to lineage branching". It is one of the three common models of evolution. While the traditional model of palaeontology, the phylogenetic model, states that features evolved slowly without any direct association with speciation, the relatively newer and more controversial idea of punctuated equilibrium claims that major evolutionary changes do not happen over a gradual period but in localized, rare, rapid events of branching speciation. Punctuated gradualism is considered to be a variation of these models, lying somewhere in between the phyletic gradualism model and the punctuated equilibrium model. It states that speciation is not needed for a lineage to rapidly evolve from one equilibrium to another but may show rapid transitions between long-stable states.
Politics and society
In politics, gradualism is the hypothesis that social change can be achieved in small, discrete increments rather than in abrupt strokes such as revolutions or uprisings. Gradualism is one of the defining features of political liberalism and reformism. Machiavellian politics pushes politicians to espouse gradualism.
Gradualism in social change implemented through reformist means is a moral principle to which the Fabian Society is committed. In a more general way, reformism is the assumption that gradual changes through and within existing institutions can ultimately change a society's fundamental economic system and political structures; and that an accumulation of reforms can lead to the emergence of an entirely different economic system and form of society than present-day capitalism. That hypothesis of social change grew out of opposition to revolutionary socialism, which contends that revolution is necessary for fundamental structural changes to occur.
In socialist politics and within the socialist movement, the concept of gradualism is frequently distinguished from reformism, with the former insisting that short-term goals need to be formulated and implemented in such a way that they inevitably lead into long-term goals. It is most commonly associated with the libertarian socialist concept of dual power and is seen as a middle way between reformism and revolutionism.
Martin Luther King Jr. was opposed to the idea of gradualism as a method of eliminating segregation. The United States government wanted to try to integrate African-Americans and European-Americans slowly into the same society, but many believed it was a way for the government to put off actually doing anything about racial segregation:
Conspiracy theories
In the terminology of speculations related to a New World Order, gradualism refers to the gradual implementation of a totalitarian world government.
Linguistics and language change
In linguistics, language change is seen as gradual, the product of chain reactions and subject to cyclic drift. The view that creole languages are the product of catastrophism is heavily disputed.
Morality
Christianity
Buddhism, Theravada and Yoga
Gradualism is the approach of certain schools of Buddhism and other Eastern philosophies (e.g. Theravada or Yoga) that enlightenment can be achieved step by step, through arduous practice. The opposite approach, that insight is attained all at once, is called subitism. The debate on the issue was very important to the history of the development of Zen, which rejected gradualism, and to the establishment of the opposite approach within Tibetan Buddhism, after the Debate of Samye. It was continued in other schools of Indian and Chinese philosophy.
Philosophy
Contradictorial gradualism is the paraconsistent treatment of fuzziness developed by Lorenzo Peña which regards true contradictions as situations wherein a state of affairs enjoys only partial existence.
See also
Evolution
Uniformitarianism
Incrementalism
Normalization (sociology)
Reformism
Catastrophism
Saltation
Punctuated equilibrium
Accelerationism
Boiling frog
References
Geology theories
Rate of evolution
Liberalism
Social democracy
Democratic socialism
Historical linguistics
Social theories | 0.784322 | 0.984947 | 0.772515 |
Phosphorus cycle | The phosphorus cycle is the biogeochemical cycle that involves the movement of phosphorus through the lithosphere, hydrosphere, and biosphere. Unlike many other biogeochemical cycles, the atmosphere does not play a significant role in the movement of phosphorus, because phosphorus and phosphorus-based materials do not enter the gaseous phase readily, as the main source of gaseous phosphorus, phosphine, is only produced in isolated and specific conditions. Therefore, the phosphorus cycle is primarily examined studying the movement of orthophosphate (PO4)3-, the form of phosphorus that is most commonly seen in the environment, through terrestrial and aquatic ecosystems.
Living organisms require phosphorus, a vital component of DNA, RNA, ATP, etc., for their proper functioning. Phosphorus also enters in the composition of phospholipids present in cell membranes. Plants assimilate phosphorus as phosphate and incorporate it into organic compounds. In animals, inorganic phosphorus in the form of apatite is also a key component of bones, teeth (tooth enamel), etc. On the land, phosphorus gradually becomes less available to plants over thousands of years, since it is slowly lost in runoff. Low concentration of phosphorus in soils reduces plant growth and slows soil microbial growth, as shown in studies of soil microbial biomass. Soil microorganisms act as both sinks and sources of available phosphorus in the biogeochemical cycle. Short-term transformation of phosphorus is chemical, biological, or microbiological. In the long-term global cycle, however, the major transfer is driven by tectonic movement over geologic time and weathering of phosphate containing rock such as apatite. Furthermore, phosphorus tends to be a limiting nutrient in aquatic ecosystems. However, as phosphorus enters aquatic ecosystems, it has the possibility to lead to over-production in the form of eutrophication, which can happen in both freshwater and saltwater environments.
Human activities have caused major changes to the global phosphorus cycle primarily through the mining and subsequent transformation of phosphorus minerals for use in fertilizer and industrial products. Some phosphorus is also lost as effluent through the mining and industrial processes as well.
Phosphorus in the environment
Ecological function
Phosphorus is an essential nutrient for plants and animals. Phosphorus is a limiting nutrient for aquatic organisms. Phosphorus forms parts of important life-sustaining molecules that are very common in the biosphere. Phosphorus does enter the atmosphere in very small amounts when dust containing phosphorus is dissolved in rainwater and sea spray, but the element mainly remains on land and in rock and soil minerals. Phosphates, which are found in fertilizers, sewage and detergents, can cause pollution in lakes and streams. Over-enrichment of phosphate in both fresh and inshore marine waters can lead to massive algae blooms. In fresh water, the death and decay of these blooms leads to eutrophication. An example of this is the Canadian Experimental Lakes Area.
Freshwater algal blooms are generally caused by excess phosphorus, while those that take place in saltwater tend to occur when excess nitrogen is added. However, it is possible for eutrophication to be due to a spike in phosphorus content in both freshwater and saltwater environments.
Phosphorus occurs most abundantly in nature as part of the orthophosphate ion (PO4)3−, consisting of a P atom and 4 oxygen atoms. On land most phosphorus is found in rocks and minerals. Phosphorus-rich deposits have generally formed in the ocean or from guano, and over time, geologic processes bring ocean sediments to land. Weathering of rocks and minerals release phosphorus in a soluble form where it is taken up by plants, and it is transformed into organic compounds. The plants may then be consumed by herbivores and the phosphorus is either incorporated into their tissues or excreted. After death, the animal or plant decays, and phosphorus is returned to the soil where a large part of the phosphorus is transformed into insoluble compounds. Runoff may carry a small part of the phosphorus back to the ocean. Generally with time (thousands of years) soils become deficient in phosphorus leading to ecosystem retrogression.
Major pools in aquatic systems
There are four major pools of phosphorus in freshwater ecosystems: dissolved inorganic phosphorus (DIP), dissolved organic phosphorus (DOP), particulate inorganic phosphorus (PIP) and particulate organic phosphorus (POP). Dissolved material is defined as substances that pass through a 0.45 μm filter. DIP consists mainly of orthophosphate (PO43-) and polyphosphate, while DOP consists of DNA and phosphoproteins. Particulate matter comprises the substances that get caught on a 0.45 μm filter and do not pass through. POP consists of both living and dead organisms, while PIP mainly consists of hydroxyapatite, Ca5(PO4)3OH. Inorganic phosphorus comes in the form of readily soluble orthophosphate. Particulate organic phosphorus occurs in suspension in living and dead protoplasm and is insoluble. Dissolved organic phosphorus is derived from the particulate organic phosphorus by excretion and decomposition and is soluble.
Biological function
The primary biological importance of phosphates is as a component of nucleotides, which serve as energy storage within cells (ATP) or, when linked together, form the nucleic acids DNA and RNA. The double helix of DNA is only possible because of the phosphate ester bridge that binds the helix. Besides making biomolecules, phosphorus is also found in bone and the enamel of mammalian teeth, whose strength is derived from calcium phosphate in the form of hydroxyapatite. It is also found in the exoskeleton of insects, and in phospholipids (found in all biological membranes). It also functions as a buffering agent in maintaining acid-base homeostasis in the human body.
Phosphorus cycling
Phosphates move quickly through plants and animals; however, the processes that move them through the soil or ocean are very slow, making the phosphorus cycle overall one of the slowest biogeochemical cycles.
The global phosphorus cycle includes four major processes:
(i) tectonic uplift and exposure of phosphorus-bearing rocks such as apatite to surface weathering;
(ii) physical erosion, and chemical and biological weathering of phosphorus-bearing rocks to provide dissolved and particulate phosphorus to soils, lakes and rivers;
(iii) riverine and subsurface transportation of phosphorus to various lakes and run-off to the ocean;
(iv) sedimentation of particulate phosphorus (e.g., phosphorus associated with organic matter and oxide/carbonate minerals) and eventually burial in marine sediments (this process can also occur in lakes and rivers).
In terrestrial systems, bioavailable P (‘reactive P’) mainly comes from weathering of phosphorus-containing rocks. The most abundant primary phosphorus-mineral in the crust is apatite, which can be dissolved by natural acids generated by soil microbes and fungi, or by other chemical weathering reactions and physical erosion. The dissolved phosphorus is bioavailable to terrestrial organisms and plants and is returned to the soil after their decay. Phosphorus retention by soil minerals (e.g., adsorption onto iron and aluminum oxyhydroxides in acidic soils and precipitation onto calcite in neutral-to-calcareous soils) is usually viewed as the most important process in controlling terrestrial P-bioavailability in the mineral soil. This process can lead to the low level of dissolved phosphorus concentrations in soil solution. Various physiological strategies are used by plants and microorganisms for obtaining phosphorus from this low level of phosphorus concentration.
Soil phosphorus is usually transported to rivers and lakes and can then either be buried in lake sediments or transported to the ocean via river runoff. Atmospheric phosphorus deposition is another important marine phosphorus source to the ocean. In surface seawater, dissolved inorganic phosphorus, mainly orthophosphate (PO43-), is assimilated by phytoplankton and transformed into organic phosphorus compounds. Phytoplankton cell lysis releases cellular dissolved inorganic and organic phosphorus to the surrounding environment. Some of the organic phosphorus compounds can be hydrolyzed by enzymes synthesized by bacteria and phytoplankton and subsequently assimilated. The vast majority of phosphorus is remineralized within the water column, and approximately 1% of associated phosphorus carried to the deep sea by the falling particles is removed from the ocean reservoir by burial in sediments. A series of diagenetic processes act to enrich sediment pore water phosphorus concentrations, resulting in an appreciable benthic return flux of phosphorus to overlying bottom waters. These processes include
(i) microbial respiration of organic matter in sediments,
(ii) microbial reduction and dissolution of iron and manganese (oxyhydr)oxides with subsequent release of associated phosphorus, which connects the phosphorus cycle to the iron cycle, and
(iii) abiotic reduction of iron (oxyhydr)oxides by hydrogen sulfide and liberation of iron-associated phosphorus.
Additionally,
(iv) phosphate associated with calcium carbonate and
(v) transformation of iron oxide-bound phosphorus to vivianite play critical roles in phosphorus burial in marine sediments.
These processes are similar to phosphorus cycling in lakes and rivers.
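To make the reservoir-and-flux picture concrete, the toy model below tracks phosphorus in just two pools (water column and sediments) with settling, benthic return, and burial terms; the rate constants and initial stocks are invented for illustration and are not measured values.

```python
# Toy two-box sketch of water-column vs. sediment phosphorus, illustrating the
# reservoir/flux view described above. The rate constants and initial stocks
# are invented for illustration, not measured fluxes.
def simulate(years=1000, dt=1.0):
    water, sediment = 100.0, 1000.0        # arbitrary stock units
    river_input = 1.0                      # P delivered by rivers per year
    for _ in range(int(years / dt)):
        settling = 0.02 * water            # particulate P sinking to sediments
        benthic_return = 0.001 * sediment  # diagenetic return flux to bottom water
        burial = 0.0002 * sediment         # permanent removal by burial
        water += (river_input + benthic_return - settling) * dt
        sediment += (settling - benthic_return - burial) * dt
    return water, sediment

print(simulate())
```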
Although orthophosphate (PO43-), the dominant inorganic P species in nature, is in the +5 oxidation state, certain microorganisms can use phosphonate and phosphite (+3 oxidation state) as a P source by oxidizing them to orthophosphate. Recently, rapid production and release of reduced phosphorus compounds has provided new clues about the role of reduced P as a missing link in the oceanic phosphorus cycle.
Phosphatic minerals
The availability of phosphorus in an ecosystem is restricted by its rate of release during weathering. The release of phosphorus from apatite dissolution is a key control on ecosystem productivity. The primary mineral with significant phosphorus content, apatite [Ca5(PO4)3OH] undergoes carbonation.
Little of this released phosphorus is taken up by biota, as it mainly reacts with other soil minerals. This leads to phosphorus becoming unavailable to organisms in the later stage of weathering and soil development as it will precipitate into rocks. Available phosphorus is found in a biogeochemical cycle in the upper soil profile, while phosphorus found at lower depths is primarily involved in geochemical reactions with secondary minerals. Plant growth depends on the rapid root uptake of phosphorus released from dead organic matter in the biochemical cycle. Phosphorus is limited in supply for plant growth. Phosphates move quickly through plants and animals; however, the processes that move them through the soil or ocean are very slow, making the phosphorus cycle overall one of the slowest biogeochemical cycles.
Low-molecular-weight (LMW) organic acids are found in soils. They originate from the activities of various microorganisms in soils or may be exuded from the roots of living plants. Several of those organic acids are capable of forming stable organo-metal complexes with various metal ions found in soil solutions. As a result, these processes may lead to the release of inorganic phosphorus associated with aluminum, iron, and calcium in soil minerals. The production and release of oxalic acid by mycorrhizal fungi explain their importance in maintaining and supplying phosphorus to plants.
The availability of organic phosphorus to support microbial, plant and animal growth depends on the rate of their degradation to generate free phosphate. There are various enzymes such as phosphatases, nucleases and phytase involved for the degradation. Some of the abiotic pathways in the environment studied are hydrolytic reactions and photolytic reactions. Enzymatic hydrolysis of organic phosphorus is an essential step in the biogeochemical phosphorus cycle, including the phosphorus nutrition of plants and microorganisms and the transfer of organic phosphorus from soil to bodies of water. Many organisms rely on the soil derived phosphorus for their phosphorus nutrition.
Eutrophication
Eutrophication occurs when waters are enriched by nutrients, leading to structural changes in the aquatic ecosystem such as algal blooms, deoxygenation, and a reduction of fish species. It does occur naturally, as when lakes age they become more productive due to increases in major limiting nutrients such as nitrogen and phosphorus. For example, phosphorus can enter into lakes where it will accumulate in the sediments and the biosphere. It can also be recycled from the sediments and the water system allowing it to stay in the environment. Anthropogenic effects can also cause phosphorus to flow into aquatic ecosystems, as seen in drainage water and runoff from fertilized soils on agricultural land. Additionally, eroded soils, which can be caused by deforestation and urbanization, can lead to more phosphorus and nitrogen being added to these aquatic ecosystems. These all increase the amount of phosphorus that enters the cycle, which has led to excessive nutrient intake in freshwater systems, causing dramatic growth in algal populations. When these algae die, their putrefaction depletes the water of oxygen and can toxify the waters. Both these effects cause plant and animal death rates to increase as the plants take in and animals drink the poisonous water.
Saltwater phosphorus eutrophication
Oceanic ecosystems gather phosphorus from many sources, but it is mainly derived from the weathering of phosphorus-containing rocks, with the released phosphorus then transported to the oceans in a dissolved form by river runoff. Due to a dramatic rise in mining for phosphorus, it is estimated that humans have increased the net storage of phosphorus in soil and ocean systems by 75%. This increase in phosphorus has led to more eutrophication in ocean waters, as phytoplankton blooms have caused a drastic shift toward anoxic conditions, seen in both the Gulf of Mexico and the Baltic Sea. Some research suggests that when anoxic conditions arise from eutrophication due to excess phosphorus, this creates a positive feedback loop that releases more phosphorus from oceanic reserves, exacerbating the issue. This could possibly create a self-sustaining cycle of oceanic anoxia where the constant recovery of phosphorus keeps sustaining the eutrophic growth. Attempts to mitigate this problem using biological approaches are being investigated. One such approach involves using phosphorus-accumulating organisms such as Candidatus Accumulibacter phosphatis, which are capable of effectively storing phosphorus in the form of phosphate in marine ecosystems. Essentially, this would alter how the phosphorus cycle currently operates in marine ecosystems. Currently, there is a major influx of phosphorus due to increased agricultural use and other industrial applications; these organisms could theoretically store phosphorus and hold on to it until it could be recycled in terrestrial ecosystems, which would have lost this excess phosphorus through runoff.
Wetland
Wetlands are frequently applied to solve the issue of eutrophication. Nitrate is transformed in wetlands to free nitrogen and discharged to the air. Phosphorus is adsorbed by wetland soils which are taken up by the plants. Therefore, wetlands could help to reduce the concentration of nitrogen and phosphorus to remit eutrophication. However, wetland soils can only hold a limited amount of phosphorus. To remove phosphorus continually, it is necessary to add more new soils within the wetland from remnant plant stems, leaves, root debris, and undecomposable parts of dead algae, bacteria, fungi, and invertebrates.
Human influences
Nutrients are important to the growth and survival of living organisms, and hence, are essential for development and maintenance of healthy ecosystems. Humans have greatly influenced the phosphorus cycle by mining phosphate rock. For millennia, phosphorus was primarily brought into the environment through the weathering of phosphate containing rocks, which would replenish the phosphorus normally lost to the environment through processes such as runoff, albeit on a very slow and gradual time-scale. Since the 1840s, when the technology to mine and extract phosphorus became more prevalent, approximately 110 teragrams of phosphorus has been added to the environment. This trend appears to be continuing in the future as from 1900-2022, the amount of phosphorus mined globally has increased 72-fold, with an expected annual increase of 4%. Most of this mining is done in order to produce fertilizers which can be used on a global scale. However, at the rate humans are mining, the geological system can not restore what is lost quickly enough. Thus, researchers are examining ways to better recycle phosphorus in the environment, with one promising application including the use of microorganisms. Regardless, humans have had a profound impact on the phosphorus cycle with wide-reaching implications about food security, eutrophication, and the overall availability of the nutrient.
Other human processes can have detrimental effects on the phosphorus cycle, such as the repeated application of liquid hog manure in excess to crops. The application of biosolids may also increase available phosphorus in soil. In poorly drained soils or in areas where snowmelt can cause periodic waterlogging, reducing conditions can be attained in 7–10 days. This causes a sharp increase in phosphorus concentration in solution and phosphorus can be leached. In addition, reduction of the soil causes a shift in phosphorus from resilient to more labile forms. This could eventually increase the potential for phosphorus loss. This is of particular concern for the environmentally sound management of such areas, where disposal of agricultural wastes has already become a problem. It is suggested that the water regime of soils that are to be used for organic wastes disposal is taken into account in the preparation of waste management regulations.
See also
Peak phosphorus
Planetary boundaries
Oceanic carbon cycle
References
External links
Biogeochemical cycle
Soil biology
Soil chemistry
Phosphorus | 0.775731 | 0.995842 | 0.772505 |
Moiety (chemistry) | In organic chemistry, a moiety is a part of a molecule that is given a name because it is identified as a part of other molecules as well.
Typically, the term is used to describe the larger and characteristic parts of organic molecules, and it should not be used to describe or name smaller functional groups of atoms that chemically react in similar ways in most molecules that contain them. Occasionally, a moiety may contain smaller moieties and functional groups.
A moiety that acts as a branch extending from the backbone of a hydrocarbon molecule is called a substituent or side chain, which typically can be removed from the molecule and substituted with others.
The term is also used in pharmacology, where an active moiety is the part of a molecule responsible for the physiological or pharmacological action of a drug.
Active moiety
In pharmacology, an active moiety is the part of a molecule or ion – excluding appended inactive portions – that is responsible for the physiological or pharmacological action of a drug substance. Inactive appended portions of the drug substance may include either the alcohol or acid moiety of an ester, a salt (including a salt with hydrogen or coordination bonds), or other noncovalent derivative (such as a complex, chelate, or clathrate). The parent drug may itself be an inactive prodrug and only after the active moiety is released from the parent in free form does it become active.
See also
Moiety conservation
References
Organic chemistry | 0.778327 | 0.992519 | 0.772505 |
Partition function (statistical mechanics) | In physics, a partition function describes the statistical properties of a system in thermodynamic equilibrium. Partition functions are functions of the thermodynamic state variables, such as the temperature and volume. Most of the aggregate thermodynamic variables of the system, such as the total energy, free energy, entropy, and pressure, can be expressed in terms of the partition function or its derivatives. The partition function is dimensionless.
Each partition function is constructed to represent a particular statistical ensemble (which, in turn, corresponds to a particular free energy). The most common statistical ensembles have named partition functions. The canonical partition function applies to a canonical ensemble, in which the system is allowed to exchange heat with the environment at fixed temperature, volume, and number of particles. The grand canonical partition function applies to a grand canonical ensemble, in which the system can exchange both heat and particles with the environment, at fixed temperature, volume, and chemical potential. Other types of partition functions can be defined for different circumstances; see partition function (mathematics) for generalizations. The partition function has many physical meanings, as discussed in Meaning and significance.
Canonical partition function
Definition
Initially, let us assume that a thermodynamically large system is in thermal contact with the environment, with a temperature T, and both the volume of the system and the number of constituent particles are fixed. A collection of this kind of system comprises an ensemble called a canonical ensemble. The appropriate mathematical expression for the canonical partition function depends on the degrees of freedom of the system, whether the context is classical mechanics or quantum mechanics, and whether the spectrum of states is discrete or continuous.
Classical discrete system
For a canonical ensemble that is classical and discrete, the canonical partition function is defined as
Z = \sum_i e^{-\beta E_i},
where
i is the index for the microstates of the system;
e is Euler's number;
\beta is the thermodynamic beta, defined as \beta = \frac{1}{k_B T}, where k_B is the Boltzmann constant;
E_i is the total energy of the system in the respective microstate.
The exponential factor is otherwise known as the Boltzmann factor.
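As a numerical illustration of the definition above, the short sketch below evaluates Z and the associated Boltzmann factors for a hypothetical three-level system; the energies and temperature are arbitrary choices made for the example.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # temperature, K (arbitrary choice)
beta = 1.0 / (k_B * T)

# Hypothetical microstate energies (in joules) for a small discrete system.
energies = [0.0, 1.0e-21, 2.0e-21]

# Canonical partition function: sum of Boltzmann factors over microstates.
Z = sum(math.exp(-beta * E) for E in energies)

# Probability of each microstate (used later in the article).
probabilities = [math.exp(-beta * E) / Z for E in energies]
print(Z, probabilities, sum(probabilities))   # probabilities sum to 1
```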
Classical continuous system
In classical mechanics, the position and momentum variables of a particle can vary continuously, so the set of microstates is actually uncountable. In classical statistical mechanics, it is rather inaccurate to express the partition function as a sum of discrete terms. In this case we must describe the partition function using an integral rather than a sum. For a canonical ensemble that is classical and continuous, the canonical partition function is defined as
Z = \frac{1}{h^3} \int e^{-\beta H(q, p)} \, \mathrm{d}^3 q \, \mathrm{d}^3 p,
where
h is the Planck constant;
\beta is the thermodynamic beta, defined as \beta = \frac{1}{k_B T};
H(q, p) is the Hamiltonian of the system;
q is the canonical position;
p is the canonical momentum.
To make it into a dimensionless quantity, we must divide it by one factor of h for each pair of conjugate position and momentum variables, where h is some quantity with units of action (usually taken to be the Planck constant).
Classical continuous system (multiple identical particles)
For a gas of identical classical noninteracting particles in three dimensions, the partition function is
Z = \frac{1}{N! \, h^{3N}} \int \exp\!\left(-\beta \sum_{i=1}^{N} H(\mathbf{q}_i, \mathbf{p}_i)\right) \mathrm{d}^3 q_1 \cdots \mathrm{d}^3 q_N \, \mathrm{d}^3 p_1 \cdots \mathrm{d}^3 p_N = \frac{Z_{\text{single}}^N}{N!},
where
h is the Planck constant;
\beta is the thermodynamic beta, defined as \beta = \frac{1}{k_B T};
i is the index for the particles of the system;
H(\mathbf{q}_i, \mathbf{p}_i) is the Hamiltonian of a respective particle;
\mathbf{q}_i is the canonical position of the respective particle;
\mathbf{p}_i is the canonical momentum of the respective particle;
the boldface is shorthand notation to indicate that \mathbf{q}_i and \mathbf{p}_i are vectors in three-dimensional space;
Z_{\text{single}} is the classical continuous partition function of a single particle as given in the previous section.
The reason for the factorial factor N! is discussed below. The extra constant factor in the denominator was introduced because, unlike the discrete form, the continuous form shown above is not dimensionless. As stated in the previous section, to make it into a dimensionless quantity, we must divide it by h^{3N} (where h is usually taken to be the Planck constant).
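The sketch below evaluates this expression for the common special case of a purely kinetic single-particle Hamiltonian, H = p^2/(2m), in a box of volume V; that choice of Hamiltonian, and the particular mass, volume, and particle number used, are assumptions of the example rather than anything stated above. Working with ln Z avoids numerical overflow from the N! and the N-th power.

```python
import math

# Sketch: partition function of N identical, non-interacting classical particles,
# assuming a purely kinetic single-particle Hamiltonian H = p^2 / (2m) in a box
# of volume V (this specific Hamiltonian is an assumption of the example).
h = 6.62607015e-34      # Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def ln_Z(N, V, T, m):
    beta = 1.0 / (k_B * T)
    # Single-particle (translational) partition function:
    # Z_single = V * (2*pi*m / (beta * h^2))**1.5
    ln_z_single = math.log(V) + 1.5 * math.log(2.0 * math.pi * m / (beta * h * h))
    # Z = Z_single**N / N!  -> work with logarithms to avoid overflow;
    # math.lgamma(N + 1) equals ln(N!)
    return N * ln_z_single - math.lgamma(N + 1)

# Example: 1e20 argon-like atoms (m ~ 6.6e-26 kg) in one litre at 300 K.
print(ln_Z(N=1e20, V=1e-3, T=300.0, m=6.6e-26))
```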
Quantum mechanical discrete system
For a canonical ensemble that is quantum mechanical and discrete, the canonical partition function is defined as the trace of the Boltzmann factor:
Z = \operatorname{tr}\!\left(e^{-\beta \hat{H}}\right),
where:
\operatorname{tr}(\cdot) is the trace of a matrix;
\beta is the thermodynamic beta, defined as \beta = \frac{1}{k_B T};
\hat{H} is the Hamiltonian operator.
The dimension of e^{-\beta \hat{H}} is the number of energy eigenstates of the system.
Quantum mechanical continuous system
For a canonical ensemble that is quantum mechanical and continuous, the canonical partition function is defined as
Z = \frac{1}{h} \int \langle q, p | e^{-\beta \hat{H}} | q, p \rangle \, \mathrm{d}q \, \mathrm{d}p,
where:
h is the Planck constant;
\beta is the thermodynamic beta, defined as \beta = \frac{1}{k_B T};
\hat{H} is the Hamiltonian operator;
q is the canonical position;
p is the canonical momentum.
In systems with multiple quantum states s sharing the same energy E_s, it is said that the energy levels of the system are degenerate. In the case of degenerate energy levels, we can write the partition function in terms of the contribution from energy levels (indexed by j) as follows:
Z = \sum_j g_j e^{-\beta E_j},
where g_j is the degeneracy factor, or number of quantum states s that have the same energy level defined by E_j = E_s.
The above treatment applies to quantum statistical mechanics, where a physical system inside a finite-sized box will typically have a discrete set of energy eigenstates, which we can use as the states s above. In quantum mechanics, the partition function can be more formally written as a trace over the state space (which is independent of the choice of basis):
Z = \operatorname{tr}\!\left(e^{-\beta \hat{H}}\right),
where \hat{H} is the quantum Hamiltonian operator. The exponential of an operator can be defined using the exponential power series.
The classical form of Z is recovered when the trace is expressed in terms of coherent states and when quantum-mechanical uncertainties in the position and momentum of a particle are regarded as negligible. Formally, using bra–ket notation, one inserts under the trace for each degree of freedom the identity:
\mathbf{1} = \int \frac{\mathrm{d}x \, \mathrm{d}p}{h} \, |x, p\rangle \langle x, p|,
where |x, p\rangle is a normalised Gaussian wavepacket centered at position x and momentum p. Thus
Z = \int \frac{\mathrm{d}x \, \mathrm{d}p}{h} \, \langle x, p | e^{-\beta \hat{H}} | x, p \rangle.
A coherent state is an approximate eigenstate of both operators \hat{x} and \hat{p}, hence also of the Hamiltonian \hat{H}, with errors of the size of the uncertainties. If \Delta x and \Delta p can be regarded as zero, the action of \hat{H} reduces to multiplication by the classical Hamiltonian, and Z reduces to the classical configuration integral.
Connection to probability theory
For simplicity, we will use the discrete form of the partition function in this section. Our results will apply equally well to the continuous form.
Consider a system S embedded into a heat bath B. Let the total energy of both systems be E. Let p_i denote the probability that the system S is in a particular microstate, i, with energy E_i. According to the fundamental postulate of statistical mechanics (which states that all attainable microstates of a system are equally probable), the probability p_i will be directly proportional to the number of microstates of the total closed system (S, B) in which S is in microstate i with energy E_i. Equivalently, p_i will be proportional to the number of microstates of the heat bath B with energy E - E_i:
p_i \propto \Omega_B(E - E_i).
Assuming that the heat bath's internal energy is much larger than the energy of S, we can Taylor-expand \ln \Omega_B to first order in E_i and use the thermodynamic relation \partial S_B / \partial E = 1/T, where here S_B and T are the entropy and temperature of the bath respectively:
k_B \ln \Omega_B(E - E_i) \approx k_B \ln \Omega_B(E) - \frac{\partial (k_B \ln \Omega_B)}{\partial E} E_i = S_B(E) - \frac{E_i}{T}.
Thus
p_i \propto e^{S_B/k_B} e^{-E_i/(k_B T)} \propto e^{-\beta E_i}.
Since the total probability to find the system in some microstate (the sum of all p_i) must be equal to 1, we know that the constant of proportionality must be the normalization constant, and so, we can define the partition function to be this constant:
Z = \sum_i e^{-\beta E_i}, \qquad p_i = \frac{1}{Z} e^{-\beta E_i}.
Calculating the thermodynamic total energy
In order to demonstrate the usefulness of the partition function, let us calculate the thermodynamic value of the total energy. This is simply the expected value, or ensemble average for the energy, which is the sum of the microstate energies weighted by their probabilities:
\langle E \rangle = \sum_i E_i p_i = \frac{1}{Z} \sum_i E_i e^{-\beta E_i},
or, equivalently,
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}.
Incidentally, one should note that if the microstate energies depend on a parameter λ in the manner
E_i = E_i^{(0)} + \lambda A_i \quad \text{for all } i,
then the expected value of A is
\langle A \rangle = \sum_i A_i p_i = -\frac{1}{\beta} \frac{\partial}{\partial \lambda} \ln Z(\beta, \lambda).
This provides us with a method for calculating the expected values of many microscopic quantities. We add the quantity artificially to the microstate energies (or, in the language of quantum mechanics, to the Hamiltonian), calculate the new partition function and expected value, and then set λ to zero in the final expression. This is analogous to the source field method used in the path integral formulation of quantum field theory.
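A quick numerical check of this trick, for a hypothetical three-state system with invented energies E_i and quantities A_i, is sketched below: the direct ensemble average of A is compared with a finite-difference evaluation of the derivative of ln Z with respect to λ.

```python
import math

# Numeric check of the parameter trick above for a small hypothetical system:
# energies E_i(lambda) = E_i + lambda * A_i, and <A> = -(1/beta) d(ln Z)/d(lambda).
beta = 2.0                       # arbitrary units with k_B = 1
E = [0.0, 0.3, 1.1]              # hypothetical unperturbed energies
A = [1.0, -2.0, 0.5]             # hypothetical microscopic quantity A_i

def ln_Z(lam):
    return math.log(sum(math.exp(-beta * (e + lam * a)) for e, a in zip(E, A)))

# Direct ensemble average of A at lambda = 0 ...
Z0 = math.exp(ln_Z(0.0))
direct = sum(a * math.exp(-beta * e) for e, a in zip(E, A)) / Z0

# ... versus the derivative formula, evaluated by central finite differences.
eps = 1e-6
from_derivative = -(ln_Z(eps) - ln_Z(-eps)) / (2 * eps * beta)

print(direct, from_derivative)   # the two values agree closely
```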
Relation to thermodynamic variables
In this section, we will state the relationships between the partition function and the various thermodynamic parameters of the system. These results can be derived using the method of the previous section and the various thermodynamic relations.
As we have already seen, the thermodynamic energy is
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}.
The variance in the energy (or "energy fluctuation") is
\langle (E - \langle E \rangle)^2 \rangle = \frac{\partial^2 \ln Z}{\partial \beta^2}.
The heat capacity is
C_v = \frac{\partial \langle E \rangle}{\partial T} = k_B \beta^2 \frac{\partial^2 \ln Z}{\partial \beta^2} = \frac{1}{k_B T^2} \langle (E - \langle E \rangle)^2 \rangle.
In general, consider the extensive variable X and intensive variable Y where X and Y form a pair of conjugate variables. In ensembles where Y is fixed (and X is allowed to fluctuate), then the average value of X will be:
The sign will depend on the specific definitions of the variables X and Y. An example would be X = volume and Y = pressure. Additionally, the variance in X will be
In the special case of entropy, entropy is given by
S \equiv -\frac{\partial A}{\partial T} = k_B \ln Z + k_B T \frac{\partial \ln Z}{\partial T},
where A is the Helmholtz free energy defined as A = U - TS, where U = \langle E \rangle is the total energy and S is the entropy, so that
A = -k_B T \ln Z.
Furthermore, the heat capacity can be expressed as
C_v = T \frac{\partial S}{\partial T} = -T \frac{\partial^2 A}{\partial T^2}.
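The sketch below checks some of the relations in this section numerically for a two-level system with a single energy gap (a made-up example, in units where k_B = 1), obtaining the mean energy, the energy fluctuation, and the heat capacity from finite-difference derivatives of ln Z.

```python
import math

# Finite-difference sketch of the relations above for a hypothetical two-level
# system with energies 0 and eps_level (units with k_B = 1, so beta = 1/T).
eps_level = 1.0
T = 0.8
beta = 1.0 / T

def ln_Z(b):
    return math.log(1.0 + math.exp(-b * eps_level))

h = 1e-5
U = -(ln_Z(beta + h) - ln_Z(beta - h)) / (2 * h)                  # U = -d(ln Z)/d(beta)
varE = (ln_Z(beta + h) - 2 * ln_Z(beta) + ln_Z(beta - h)) / h**2  # <(E-<E>)^2> = d^2(ln Z)/d(beta)^2
C = beta**2 * varE                                                # heat capacity (k_B = 1)

# Direct check from the Boltzmann probabilities:
p1 = math.exp(-beta * eps_level) / (1.0 + math.exp(-beta * eps_level))
print(U, p1 * eps_level)                            # both give the mean energy
print(C, beta**2 * eps_level**2 * p1 * (1 - p1))    # Schottky-type heat capacity
```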
Partition functions of subsystems
Suppose a system is subdivided into N sub-systems with negligible interaction energy, that is, we can assume the particles are essentially non-interacting. If the partition functions of the sub-systems are ζ1, ζ2, ..., ζN, then the partition function of the entire system is the product of the individual partition functions:
Z = \prod_{j=1}^{N} \zeta_j.
If the sub-systems have the same physical properties, then their partition functions are equal, ζ1 = ζ2 = ... = ζ, in which case
Z = \zeta^N.
However, there is a well-known exception to this rule. If the sub-systems are actually identical particles, in the quantum mechanical sense that they are impossible to distinguish even in principle, the total partition function must be divided by N! (N factorial):
Z = \frac{\zeta^N}{N!}.
This is to ensure that we do not "over-count" the number of microstates. While this may seem like a strange requirement, it is actually necessary to preserve the existence of a thermodynamic limit for such systems. This is known as the Gibbs paradox.
Meaning and significance
It may not be obvious why the partition function, as we have defined it above, is an important quantity. First, consider what goes into it. The partition function is a function of the temperature T and the microstate energies E1, E2, E3, etc. The microstate energies are determined by other thermodynamic variables, such as the number of particles and the volume, as well as microscopic quantities like the mass of the constituent particles. This dependence on microscopic variables is the central point of statistical mechanics. With a model of the microscopic constituents of a system, one can calculate the microstate energies, and thus the partition function, which will then allow us to calculate all the other thermodynamic properties of the system.
The partition function can be related to thermodynamic properties because it has a very important statistical meaning. The probability P_s that the system occupies microstate s is
P_s = \frac{1}{Z} e^{-\beta E_s}.
Thus, as shown above, the partition function plays the role of a normalizing constant (note that it does not depend on s), ensuring that the probabilities sum up to one:
\sum_s P_s = \frac{1}{Z} \sum_s e^{-\beta E_s} = 1.
This is the reason for calling Z the "partition function": it encodes how the probabilities are partitioned among the different microstates, based on their individual energies. Other partition functions for different ensembles divide up the probabilities based on other macrostate variables. As an example: the partition function for the isothermal-isobaric ensemble, the generalized Boltzmann distribution, divides up probabilities based on particle number, pressure, and temperature. The energy is replaced by the characteristic potential of that ensemble, the Gibbs Free Energy. The letter Z stands for the German word Zustandssumme, "sum over states". The usefulness of the partition function stems from the fact that the macroscopic thermodynamic quantities of a system can be related to its microscopic details through the derivatives of its partition function. Finding the partition function is also equivalent to performing a Laplace transform of the density of states function from the energy domain to the β domain, and the inverse Laplace transform of the partition function reclaims the state density function of energies.
Grand canonical partition function
We can define a grand canonical partition function for a grand canonical ensemble, which describes the statistics of a constant-volume system that can exchange both heat and particles with a reservoir. The reservoir has a constant temperature T, and a chemical potential μ.
The grand canonical partition function, denoted by \mathcal{Z}, is the following sum over microstates:
\mathcal{Z}(\mu, V, T) = \sum_i e^{\beta(\mu N_i - E_i)}.
Here, each microstate is labelled by i, and has total particle number N_i and total energy E_i. This partition function is closely related to the grand potential, \Phi_G, by the relation
\Phi_G \equiv -k_B T \ln \mathcal{Z}.
This can be contrasted to the canonical partition function above, which is related instead to the Helmholtz free energy.
It is important to note that the number of microstates in the grand canonical ensemble may be much larger than in the canonical ensemble, since here we consider not only variations in energy but also in particle number. Again, the utility of the grand canonical partition function is that it is related to the probability that the system is in state i:
p_i = \frac{1}{\mathcal{Z}} e^{\beta(\mu N_i - E_i)}.
An important application of the grand canonical ensemble is in deriving exactly the statistics of a non-interacting many-body quantum gas (Fermi–Dirac statistics for fermions, Bose–Einstein statistics for bosons), however it is much more generally applicable than that. The grand canonical ensemble may also be used to describe classical systems, or even interacting quantum gases.
The grand partition function is sometimes written (equivalently) in terms of alternate variables as
\mathcal{Z}(z, V, T) = \sum_{N_i} z^{N_i} Z(N_i, V, T),
where z \equiv e^{\beta \mu} is known as the absolute activity (or fugacity) and Z(N_i, V, T) is the canonical partition function.
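As a minimal worked example (not taken from the text above), the sketch below writes the grand canonical partition function of a single fermionic orbital, which can be empty or hold one particle, and recovers the Fermi–Dirac occupation from it; the energy, chemical potential, and temperature are arbitrary illustrative values in units where k_B = 1.

```python
import math

# Sketch: grand canonical partition function of a single fermionic orbital of
# energy eps, which can hold 0 or 1 particles (units with k_B = 1). The two
# microstates contribute 1 and z*exp(-beta*eps), with z = exp(beta*mu).
def mean_occupation(eps, mu, T):
    beta = 1.0 / T
    z = math.exp(beta * mu)                    # absolute activity (fugacity)
    Xi = 1.0 + z * math.exp(-beta * eps)       # grand partition function
    n_avg = z * math.exp(-beta * eps) / Xi     # <N> = z d(ln Xi)/dz
    return n_avg

# Reproduces the Fermi-Dirac distribution 1 / (exp((eps - mu)/T) + 1):
eps, mu, T = 0.7, 0.5, 0.1
print(mean_occupation(eps, mu, T),
      1.0 / (math.exp((eps - mu) / T) + 1.0))
```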
See also
Partition function (mathematics)
Partition function (quantum field theory)
Virial theorem
Widom insertion method
References
Equations of physics | 0.774813 | 0.997012 | 0.772498 |
Volatility (chemistry) | In chemistry, volatility is a material quality which describes how readily a substance vaporizes. At a given temperature and pressure, a substance with high volatility is more likely to exist as a vapour, while a substance with low volatility is more likely to be a liquid or solid. Volatility can also describe the tendency of a vapor to condense into a liquid or solid; less volatile substances will more readily condense from a vapor than highly volatile ones. Differences in volatility can be observed by comparing how fast substances within a group evaporate (or sublimate in the case of solids) when exposed to the atmosphere. A highly volatile substance such as rubbing alcohol (isopropyl alcohol) will quickly evaporate, while a substance with low volatility such as vegetable oil will remain condensed. In general, solids are much less volatile than liquids, but there are some exceptions. Solids that sublimate (change directly from solid to vapor) such as dry ice (solid carbon dioxide) or iodine can vaporize at a similar rate as some liquids under standard conditions.
Description
Volatility itself has no defined numerical value, but it is often described using vapor pressures or boiling points (for liquids). High vapor pressures indicate a high volatility, while high boiling points indicate low volatility. Vapor pressures and boiling points are often presented in tables and charts that can be used to compare chemicals of interest. Volatility data is typically found through experimentation over a range of temperatures and pressures.
Vapor pressure
Vapor pressure is a measurement of how readily a condensed phase forms a vapor at a given temperature. A substance enclosed in a sealed vessel initially at vacuum (no air inside) will quickly fill any empty space with vapor. After the system reaches equilibrium and the rate of evaporation matches the rate of condensation, the vapor pressure can be measured. Increasing the temperature increases the amount of vapor that is formed and thus the vapor pressure. In a mixture, each substance contributes to the overall vapor pressure of the mixture, with more volatile compounds making a larger contribution.
Boiling point
Boiling point is the temperature at which the vapor pressure of a liquid is equal to the surrounding pressure, causing the liquid to rapidly evaporate, or boil. It is closely related to vapor pressure, but is dependent on pressure. The normal boiling point is the boiling point at atmospheric pressure, but it can also be reported at higher and lower pressures.
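The connection between vapor pressure and boiling point can be made quantitative with the Clausius–Clapeyron relation; the sketch below uses it to estimate a liquid's vapor pressure away from its normal boiling point, under the simplifying assumption of a constant enthalpy of vaporization. The numbers used are textbook-style values for water and are only illustrative.

```python
import math

# Rough Clausius-Clapeyron sketch: estimate a liquid's vapor pressure at
# temperature T from its normal boiling point T_b (where the vapor pressure is
# 1 atm), assuming a constant enthalpy of vaporization.
R = 8.314            # J/(mol K)
P_ref = 101325.0     # Pa, vapor pressure at the normal boiling point
T_b = 373.15         # K, normal boiling point of water
dH_vap = 40.7e3      # J/mol, approximate enthalpy of vaporization of water

def vapor_pressure(T):
    # ln(P / P_ref) = -(dH_vap / R) * (1/T - 1/T_b)
    return P_ref * math.exp(-dH_vap / R * (1.0 / T - 1.0 / T_b))

print(vapor_pressure(298.15))   # roughly 3.7 kPa; measured value near 25 C is about 3.2 kPa
```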
Contributing factors
Intermolecular forces
An important factor influencing a substance's volatility is the strength of the interactions between its molecules. Attractive forces between molecules are what holds materials together, and materials with stronger intermolecular forces, such as most solids, are typically not very volatile. Ethanol and dimethyl ether, two chemicals with the same formula (C2H6O), have different volatilities due to the different interactions that occur between their molecules in the liquid phase: ethanol molecules are capable of hydrogen bonding while dimethyl ether molecules are not. The result is an overall stronger attractive force between the ethanol molecules, making ethanol the less volatile substance of the two.
Molecular weight
In general, volatility tends to decrease with increasing molecular mass because larger molecules can participate in more intermolecular bonding, although other factors such as structure and polarity play a significant role. The effect of molecular mass can be partially isolated by comparing chemicals of similar structure (i.e. esters, alkanes, etc.). For instance, linear alkanes exhibit decreasing volatility as the number of carbons in the chain increases.
Applications
Distillation
Knowledge of volatility is often useful in the separation of components from a mixture. When a mixture of condensed substances contains multiple substances with different levels of volatility, its temperature and pressure can be manipulated such that the more volatile components change to a vapor while the less volatile substances remain in the liquid or solid phase. The newly formed vapor can then be discarded or condensed into a separate container. When the vapors are collected, this process is known as distillation.
The process of petroleum refinement utilizes a technique known as fractional distillation, which allows several chemicals of varying volatility to be separated in a single step. Crude oil entering a refinery is composed of many useful chemicals that need to be separated. The crude oil flows into a distillation tower and is heated up, which allows the more volatile components such as butane and kerosene to vaporize. These vapors move up the tower and eventually come in contact with cold surfaces, which causes them to condense and be collected. The most volatile chemicals condense at the top of the column, while the least volatile chemicals that vaporize condense in the lowest portion.
The difference in volatility between water and ethanol has traditionally been used in the refinement of drinking alcohol. In order to increase the concentration of ethanol in the product, alcohol makers would heat the initial alcohol mixture to a temperature where most of the ethanol vaporizes while most of the water remains liquid. The ethanol vapor is then collected and condensed in a separate container, resulting in a much more concentrated product.
Perfume
Volatility is an important consideration when crafting perfumes. Humans detect odors when aromatic vapors come in contact with receptors in the nose. Ingredients that vaporize quickly after being applied will produce fragrant vapors for a short time before the oils evaporate. Slow-evaporating ingredients can stay on the skin for weeks or even months, but may not produce enough vapors to produce a strong aroma. To prevent these problems, perfume designers carefully consider the volatility of essential oils and other ingredients in their perfumes. Appropriate evaporation rates are achieved by modifying the amount of highly volatile and non-volatile ingredients used.
See also
Clausius–Clapeyron relation
Distillation
Fractional distillation
Partial pressure
Raoult's law
Relative volatility
Vapor–liquid equilibrium
Volatile organic compound
References
External links
Volatility from ilpi.com
Definition of volatile from Wiktionary
Physical chemistry
Chemical properties
Thermodynamic properties
Engineering thermodynamics
Phase transitions
Gases | 0.777425 | 0.993659 | 0.772495 |
Experiment | An experiment is a procedure carried out to support or refute a hypothesis, or determine the efficacy or likelihood of something previously untried. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. Experiments vary greatly in goal and scale but always rely on repeatable procedure and logical analysis of the results. There also exist natural experimental studies.
A child may carry out basic experiments to understand how things fall to the ground, while teams of scientists may take years of systematic investigation to advance their understanding of a phenomenon. Experiments and other types of hands-on activities are very important to student learning in the science classroom. Experiments can raise test scores and help a student become more engaged and interested in the material they are learning, especially when used over time. Experiments can vary from personal and informal natural comparisons (e.g. tasting a range of chocolates to find a favorite), to highly controlled (e.g. tests requiring complex apparatus overseen by many scientists that hope to discover information about subatomic particles). Uses of experiments vary considerably between the natural and human sciences.
Experiments typically include controls, which are designed to minimize the effects of variables other than the single independent variable. This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method. Ideally, all variables in an experiment are controlled (accounted for by the control measurements) and none are uncontrolled. In such an experiment, if all controls work as expected, it is possible to conclude that the experiment works as intended, and that results are due to the effect of the tested variables.
Overview
In the scientific method, an experiment is an empirical procedure that arbitrates competing models or hypotheses. Researchers also use experimentation to test existing theories or new hypotheses to support or disprove them.
An experiment usually tests a hypothesis, which is an expectation about how a particular process or phenomenon works. However, an experiment may also aim to answer a "what-if" question, without a specific expectation about what the experiment reveals, or to confirm prior results. If an experiment is carefully conducted, the results usually either support or disprove the hypothesis. According to some philosophies of science, an experiment can never "prove" a hypothesis, it can only add support. On the other hand, an experiment that provides a counterexample can disprove a theory or hypothesis, but a theory can always be salvaged by appropriate ad hoc modifications at the expense of simplicity.
An experiment must also control the possible confounding factors—any factors that would mar the accuracy or repeatability of the experiment or the ability to interpret the results. Confounding is commonly eliminated through scientific controls and/or, in randomized experiments, through random assignment.
In engineering and the physical sciences, experiments are a primary component of the scientific method. They are used to test theories and hypotheses about how physical processes work under particular conditions (e.g., whether a particular engineering process can produce a desired chemical compound). Typically, experiments in these fields focus on replication of identical procedures in hopes of producing identical results in each replication. Random assignment is uncommon.
In medicine and the social sciences, the prevalence of experimental research varies widely across disciplines. When used, however, experiments typically follow the form of the clinical trial, where experimental units (usually individual human beings) are randomly assigned to a treatment or control condition where one or more outcomes are assessed. In contrast to norms in the physical sciences, the focus is typically on the average treatment effect (the difference in outcomes between the treatment and control groups) or another test statistic produced by the experiment. A single study typically does not involve replications of the experiment, but separate studies may be aggregated through systematic review and meta-analysis.
There are various differences in experimental practice in each of the branches of science. For example, agricultural research frequently uses randomized experiments (e.g., to test the comparative effectiveness of different fertilizers), while experimental economics often involves experimental tests of theorized human behaviors without relying on random assignment of individuals to treatment and control conditions.
History
One of the first methodical approaches to experiments in the modern sense is visible in the works of the Arab mathematician and scholar Ibn al-Haytham. He conducted his experiments in the field of optics—going back to optical and mathematical problems in the works of Ptolemy—and controlled them through factors such as self-criticism, reliance on the visible results of the experiments, and a critical attitude toward earlier results. He was one of the first scholars to use an inductive-experimental method for achieving results. In his Book of Optics he describes a fundamentally new, experimental approach to knowledge and research.
According to his explanation, a strictly controlled test procedure is necessary, together with a sensibility for the subjectivity and fallibility of outcomes owing to the nature of the observer. Furthermore, a critical view of the results and outcomes of earlier scholars is necessary.
Thus, a comparison of earlier results with one's own experimental results is necessary for an objective experiment—the visible results being the more important. In the end, this may mean that an experimental researcher must find enough courage to discard traditional opinions or results, especially if those results are not experimental but derive from logical or mental derivation. In this process of critical consideration, researchers should not forget that they tend toward subjective opinions—through "prejudices" and "leniency"—and thus must be critical about their own way of building hypotheses.
Francis Bacon (1561–1626), an English philosopher and scientist active in the 17th century, became an influential supporter of experimental science in the English Renaissance. He disagreed with the method of answering scientific questions by deduction—similar to Ibn al-Haytham—and described it as follows: "Having first determined the question according to his will, man then resorts to experience, and bending her to conformity with his placets, leads her about like a captive in a procession." Bacon wanted a method that relied on repeatable observations, or experiments. Notably, he was among the first to set out an ordered account of the scientific method as we understand it today.
In the centuries that followed, people who applied the scientific method in different areas made important advances and discoveries. For example, Galileo Galilei (1564–1642) accurately measured time and experimented to make accurate measurements and conclusions about the speed of a falling body. Antoine Lavoisier (1743–1794), a French chemist, used experiment to describe new areas, such as combustion and biochemistry and to develop the theory of conservation of mass (matter). Louis Pasteur (1822–1895) used the scientific method to disprove the prevailing theory of spontaneous generation and to develop the germ theory of disease. Because of the importance of controlling potentially confounding variables, the use of well-designed laboratory experiments is preferred when possible.
A considerable amount of progress on the design and analysis of experiments occurred in the early 20th century, with contributions from statisticians such as Ronald Fisher (1890–1962), Jerzy Neyman (1894–1981), Oscar Kempthorne (1919–2000), Gertrude Mary Cox (1900–1978), and William Gemmell Cochran (1909–1980), among others.
Types
Experiments might be categorized according to a number of dimensions, depending upon professional norms and standards in different fields of study.
In some disciplines (e.g., psychology or political science), a 'true experiment' is a method of social research in which there are two kinds of variables. The independent variable is manipulated by the experimenter, and the dependent variable is measured. The signifying characteristic of a true experiment is that it randomly allocates the subjects to neutralize experimenter bias, and ensures, over a large number of iterations of the experiment, that it controls for all confounding factors.
Depending on the discipline, experiments can be conducted to accomplish different but not mutually exclusive goals: test theories, search for and document phenomena, develop theories, or advise policymakers. These goals also relate differently to validity concerns.
Controlled experiments
A controlled experiment often compares the results obtained from experimental samples against control samples, which are practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable). A good example would be a drug trial. The sample or group receiving the drug would be the experimental group (treatment group), and the one receiving the placebo or regular treatment would be the control group. In many laboratory experiments it is good practice to have several replicate samples for the test being performed and have both a positive control and a negative control. The results from replicate samples can often be averaged, or if one of the replicates is obviously inconsistent with the results from the other samples, it can be discarded as being the result of an experimental error (some step of the test procedure may have been mistakenly omitted for that sample). Most often, tests are done in duplicate or triplicate. A positive control is a procedure similar to the actual experimental test but is known from previous experience to give a positive result. A negative control is known to give a negative result. The positive control confirms that the basic conditions of the experiment were able to produce a positive result, even if none of the actual experimental samples produce a positive result. The negative control demonstrates the base-line result obtained when a test does not produce a measurable positive result. Most often the value of the negative control is treated as a "background" value to subtract from the test sample results. Sometimes the positive control takes the form of a standard curve.
An example that is often used in teaching laboratories is a controlled protein assay. Students might be given a fluid sample containing an unknown (to the student) amount of protein. It is their job to correctly perform a controlled experiment in which they determine the concentration of protein in the fluid sample (usually called the "unknown sample"). The teaching lab would be equipped with a protein standard solution with a known protein concentration. Students could make several positive control samples containing various dilutions of the protein standard. Negative control samples would contain all of the reagents for the protein assay but no protein. In this example, all samples are performed in duplicate. The assay is a colorimetric assay in which a spectrophotometer can measure the amount of protein in samples by detecting a colored complex formed by the interaction of protein molecules and molecules of an added dye. In the illustration, the results for the diluted test samples can be compared to the results of the standard curve (the blue line in the illustration) to estimate the amount of protein in the unknown sample.
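To make the arithmetic of such an assay concrete, the following is a minimal sketch in Python of how the standard-curve comparison might be carried out. The absorbance readings, dilution scheme, and the assumption of a linear colour response are illustrative only and are not taken from any particular assay.

```python
# Minimal sketch of analysing a controlled colorimetric protein assay.
# All numbers are made up for illustration; a real assay would use its own standards.
import numpy as np

# Positive controls: known protein concentrations (mg/mL) and their measured
# absorbances (duplicates already averaged).
standard_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
standard_abs = np.array([0.05, 0.17, 0.29, 0.55, 1.02])

# The negative control (all reagents, no protein) supplies the background to subtract.
background = standard_abs[0]

# Fit a straight-line standard curve: corrected absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(standard_conc, standard_abs - background, 1)

# The unknown sample, measured in duplicate: average, subtract background, invert the curve.
unknown_abs = np.mean([0.44, 0.46]) - background
unknown_conc = (unknown_abs - intercept) / slope
print(f"Estimated protein concentration: {unknown_conc:.2f} mg/mL")
```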
Controlled experiments can be performed when it is difficult to exactly control all the conditions in an experiment. In this case, the experiment begins by creating two or more sample groups that are probabilistically equivalent, which means that measurements of traits should be similar among the groups and that the groups should respond in the same manner if given the same treatment. This equivalency is determined by statistical methods that take into account the amount of variation between individuals and the number of individuals in each group. In fields such as microbiology and chemistry, where there is very little variation between individuals and the group size is easily in the millions, these statistical methods are often bypassed and simply splitting a solution into equal parts is assumed to produce identical sample groups.
Once equivalent groups have been formed, the experimenter tries to treat them identically except for the one variable that he or she wishes to isolate. Human experimentation requires special safeguards against outside variables such as the placebo effect. Such experiments are generally double blind, meaning that neither the volunteer nor the researcher knows which individuals are in the control group or the experimental group until after all of the data have been collected. This ensures that any effects on the volunteer are due to the treatment itself and are not a response to the knowledge that he is being treated.
In human experiments, researchers may give a subject (person) a stimulus that the subject responds to. The goal of the experiment is to measure the response to the stimulus by a test method.
In the design of experiments, two or more "treatments" are applied to estimate the difference between the mean responses for the treatments. For example, an experiment on baking bread could estimate the difference in the responses associated with quantitative variables, such as the ratio of water to flour, and with qualitative variables, such as strains of yeast. Experimentation is the step in the scientific method that helps people decide between two or more competing explanations—or hypotheses. These hypotheses suggest reasons to explain a phenomenon or predict the results of an action. An example might be the hypothesis that "if I release this ball, it will fall to the floor": this suggestion can then be tested by carrying out the experiment of letting go of the ball, and observing the results. Formally, a hypothesis is compared against its opposite or null hypothesis ("if I release this ball, it will not fall to the floor"). The null hypothesis is that there is no explanation or predictive power of the phenomenon through the reasoning that is being investigated. Once hypotheses are defined, an experiment can be carried out and the results analysed to confirm, refute, or define the accuracy of the hypotheses.
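As an illustration of how such a two-treatment comparison might be analysed, the sketch below (Python) estimates the difference in mean response and runs a simple permutation test of the null hypothesis of "no difference". The response values and the choice of test are assumptions made for the example, not something prescribed here.

```python
# Sketch: comparing two treatments and testing the null hypothesis of no difference.
# The responses are invented (e.g., loaf volumes in mL for two yeast strains).
import random

treatment_a = [410, 432, 398, 425, 440, 415]
treatment_b = [455, 470, 448, 462, 451, 468]
observed_diff = sum(treatment_b) / len(treatment_b) - sum(treatment_a) / len(treatment_a)

# Permutation test: under the null hypothesis the treatment labels are interchangeable,
# so reshuffle them repeatedly and count how often a difference this large arises by chance.
pooled = treatment_a + treatment_b
n_a = len(treatment_a)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n_a:]) / (len(pooled) - n_a) - sum(pooled[:n_a]) / n_a
    if abs(diff) >= abs(observed_diff):
        extreme += 1

print(f"Observed difference in mean response: {observed_diff:.1f}")
print(f"Approximate p-value under the null hypothesis: {extreme / trials:.4f}")
```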
Experiments can be also designed to estimate spillover effects onto nearby untreated units.
Natural experiments
The term "experiment" usually implies a controlled experiment, but sometimes controlled experiments are prohibitively difficult, impossible, unethical or illegal. In this case researchers resort to natural experiments or quasi-experiments. Natural experiments rely solely on observations of the variables of the system under study, rather than manipulation of just one or a few variables as occurs in controlled experiments. To the degree possible, they attempt to collect data for the system in such a way that contribution from all variables can be determined, and where the effects of variation in certain variables remain approximately constant so that the effects of other variables can be discerned. The degree to which this is possible depends on the observed correlation between explanatory variables in the observed data. When these variables are not well correlated, natural experiments can approach the power of controlled experiments. Usually, however, there is some correlation between these variables, which reduces the reliability of natural experiments relative to what could be concluded if a controlled experiment were performed. Also, because natural experiments usually take place in uncontrolled environments, variables from undetected sources are neither measured nor held constant, and these may produce illusory correlations in variables under study.
Much research in several science disciplines, including economics, human geography, archaeology, sociology, cultural anthropology, geology, paleontology, ecology, meteorology, and astronomy, relies on quasi-experiments. For example, in astronomy it is clearly impossible, when testing the hypothesis "Stars are collapsed clouds of hydrogen", to start out with a giant cloud of hydrogen, and then perform the experiment of waiting a few billion years for it to form a star. However, by observing various clouds of hydrogen in various states of collapse, and other implications of the hypothesis (for example, the presence of various spectral emissions from the light of stars), we can collect the data we require to support the hypothesis. An early example of this type of experiment was the first verification in the 17th century that light does not travel from place to place instantaneously, but instead has a measurable speed. Observations of the appearances of the moons of Jupiter were slightly delayed when Jupiter was farther from Earth than when it was closer, and this phenomenon was used to demonstrate that the difference in the time of appearance of the moons was consistent with a measurable speed.
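A rough back-of-the-envelope version of that reasoning can be written down directly. The sketch below (Python) simply shows that a finite light speed implies a delay of roughly a quarter of an hour across the diameter of Earth's orbit; it uses modern constants and is not a reconstruction of the original 17th-century analysis.

```python
# Illustration: time for light to cross the diameter of Earth's orbit.
# The 17th-century observers reasoned in the opposite direction, inferring a finite
# speed of light from the observed delay in the timing of Jupiter's moons.
ASTRONOMICAL_UNIT_M = 1.496e11  # mean Earth-Sun distance, metres
SPEED_OF_LIGHT_M_S = 2.998e8    # metres per second

delay_s = 2 * ASTRONOMICAL_UNIT_M / SPEED_OF_LIGHT_M_S
print(f"Light crosses Earth's orbital diameter in about {delay_s / 60:.1f} minutes")
```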
Field experiments
Field experiments are so named to distinguish them from laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Often used in the social sciences, and especially in economic analyses of education and health interventions, field experiments have the advantage that outcomes are observed in a natural setting rather than in a contrived laboratory environment. For this reason, field experiments are sometimes seen as having higher external validity than laboratory experiments. However, like natural experiments, field experiments suffer from the possibility of contamination: experimental conditions can be controlled with more precision and certainty in the lab. Yet some phenomena (e.g., voter turnout in an election) cannot be easily studied in a laboratory.
Observational studies
An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). For an observational science to be valid, the experimenter must know and account for confounding factors. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data.
Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually specified also by the experimental protocol. Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model. Inferences from subjective models are unreliable in theory and practice. In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies are inconsistent and also differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.
A particular problem with observational studies involving human subjects is the great difficulty attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and results are not meaningful if a covariate is neither randomized nor included in the model.
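The balancing effect of randomization described above can be illustrated with a short simulation. The covariate values and group sizes below are arbitrary assumptions made for the example.

```python
# Sketch: random assignment tends to equalize covariate means between groups.
import random
from statistics import mean

random.seed(1)
# A made-up covariate (age in years) for 200 prospective participants.
ages = [random.gauss(50, 12) for _ in range(200)]

random.shuffle(ages)  # random assignment to two equally sized groups
treatment, control = ages[:100], ages[100:]

print(f"Mean age, treatment group: {mean(treatment):.1f}")
print(f"Mean age, control group:   {mean(control):.1f}")
# With adequate sample sizes the two group means are close on average; this balance
# across measured and unmeasured covariates is what randomization provides in expectation.
```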
To avoid conditions that render an experiment far less useful, physicians conducting medical trials—say for U.S. Food and Drug Administration approval—quantify and randomize the covariates that can be identified. Researchers attempt to reduce the biases of observational studies with matching methods such as propensity score matching, which require large populations of subjects and extensive information on covariates. However, propensity score matching is no longer recommended as a technique because it can increase, rather than decrease, bias. Outcomes are also quantified when possible (bone density, the amount of some cell or substance in the blood, physical strength or endurance, etc.) and not based on a subject's or a professional observer's opinion. In this way, the design of an observational study can render the results more objective and therefore, more convincing.
Ethics
By placing the distribution of the independent variable(s) under the control of the researcher, an experiment—particularly when it involves human subjects—introduces potential ethical considerations, such as balancing benefit and harm, fairly distributing interventions (e.g., treatments for a disease), and informed consent. For example, in psychology or health care, it is unethical to provide a substandard treatment to patients. Therefore, ethical review boards are supposed to stop clinical trials and other experiments unless a new treatment is believed to offer benefits as good as current best practice. It is also generally unethical (and often illegal) to conduct randomized experiments on the effects of substandard or harmful treatments, such as the effects of ingesting arsenic on human health. To understand the effects of such exposures, scientists sometimes use observational studies to understand the effects of those factors.
Even when experimental research does not directly involve human subjects, it may still present ethical concerns. For example, the nuclear bomb experiments conducted by the Manhattan Project implied the use of nuclear reactions to harm human beings even though the experiments did not directly involve any human subjects.
See also
Allegiance bias
Black box experimentation
Concept development and experimentation
Design of experiments
Experimentum crucis
Experimental physics
Experimental psychology
Empirical research
Laboratory
List of experiments
Long-term experiment
External links
Lessons In Electric Circuits – Volume VI – Experiments
Experiment in Physics from Stanford Encyclopedia of Philosophy
Research
Design of experiments
Causal inference
Human science
Human science (or human sciences in the plural) studies the philosophical, biological, social, justice, and cultural aspects of human life. Human science aims to expand the understanding of the human world through a broad interdisciplinary approach. It encompasses a wide range of fields, including history, philosophy, sociology, psychology, justice studies, evolutionary biology, biochemistry, neurosciences, folkloristics, and anthropology. It is the study and interpretation of the experiences, activities, constructs, and artifacts associated with human beings. The study of the human sciences attempts to expand and enlighten the human being's knowledge of its existence, its interrelationship with other species and systems, and the development of artifacts to perpetuate human expression and thought. It is the study of human phenomena. The study of the human experience is historical and current in nature. It requires the evaluation and interpretation of the historic human experience and the analysis of current human activity to gain an understanding of human phenomena and to project the outlines of human evolution. Human science is an objective, informed critique of human existence and how it relates to reality. Underlying human science is the relationship between various humanistic modes of inquiry within fields such as history, sociology, folkloristics, anthropology, and economics, and advances in such things as genetics, evolutionary biology, and the social sciences, for the purpose of understanding our lives in a rapidly changing world. Its use of an empirical methodology that encompasses psychological experience contrasts with the purely positivistic approach typical of the natural sciences, which excludes all methods not based solely on sensory observations. Modern approaches in the human sciences integrate an understanding of human structure, function, and adaptation with a broader exploration of what it means to be human. The term is also used to distinguish not only the content of a field of study from that of the natural sciences, but also its methodology.
Meaning of 'science'
Ambiguity and confusion regarding the usage of the terms 'science', 'empirical science', and 'scientific method' have complicated the usage of the term 'human science' with respect to human activities. The term 'science' is derived from the Latin scientia, meaning 'knowledge'. 'Science' may be appropriately used to refer to any branch of knowledge or study dealing with a body of facts or truths systematically arranged to show the operation of general laws.
However, according to positivists, the only authentic knowledge is scientific knowledge, which comes from the positive affirmation of theories through strict scientific method, the application of knowledge, or mathematics. As a result of the positivist influence, the term science is frequently employed as a synonym for empirical science. Empirical science is knowledge based on the scientific method, a systematic approach to verification of knowledge first developed for dealing with natural physical phenomena and emphasizing the importance of experience based on sensory observation. However, even with regard to the natural sciences, significant differences exist among scientists and philosophers of science with regard to what constitutes valid scientific method—for example, evolutionary biology, geology and astronomy, studying events that cannot be repeated, can use the method of historical narratives. More recently, usage of the term has been extended to the study of human social phenomena. Thus, natural and social sciences are commonly classified as science, whereas the study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts are referred to as the humanities. Ambiguity with respect to the meaning of the term science is aggravated by the widespread use of the term formal science with reference to any one of several sciences that is predominantly concerned with abstract form that cannot be validated by physical experience through the senses, such as logic, mathematics, and the theoretical branches of computer science, information theory, and statistics.
History
The phrase 'human science' in English was used during the 17th-century scientific revolution, for example by Theophilus Gale, to draw a distinction between supernatural knowledge (divine science) and study by humans (human science). John Locke also uses 'human science' to mean knowledge produced by people, but without the distinction. By the 20th century, this latter meaning was used at the same time as 'sciences that make human beings the topic of research'.
Early development
The term "moral science" was used by David Hume (1711–1776) in his Enquiry concerning the Principles of Morals to refer to the systematic study of human nature and relationships. Hume wished to establish a "science of human nature" based upon empirical phenomena, and excluding all that does not arise from observation. Rejecting teleological, theological and metaphysical explanations, Hume sought to develop an essentially descriptive methodology; phenomena were to be precisely characterized. He emphasized the necessity of carefully explicating the cognitive content of ideas and vocabulary, relating these to their empirical roots and real-world significance.
A variety of early thinkers in the humanistic sciences took up Hume's direction. Adam Smith, for example, conceived of economics as a moral science in the Humean sense.
Later development
Partly in reaction to the establishment of positivist philosophy and the latter's Comtean intrusions into traditionally humanistic areas such as sociology, non-positivistic researchers in the humanistic sciences began to carefully but emphatically distinguish the methodological approach appropriate to these areas of study, for which the unique and distinguishing characteristics of phenomena are in the forefront (e.g., for the biographer), from that appropriate to the natural sciences, for which the ability to link phenomena into generalized groups is foremost. In this sense, Johann Gustav Droysen contrasted the humanistic sciences' need to comprehend the phenomena under consideration with natural science's need to explain phenomena, while Windelband coined the terms idiographic for a descriptive study of the individual nature of phenomena, and nomothetic for sciences that aim to define generalizing laws.
Wilhelm Dilthey brought nineteenth-century attempts to formulate a methodology appropriate to the humanistic sciences together with Hume's term "moral science", which he translated as Geisteswissenschaft, a term with no exact English equivalent. Dilthey attempted to articulate the entire range of the moral sciences in a comprehensive and systematic way. His conception of "Geisteswissenschaften" also encompasses the abovementioned study of classics, languages, literature, music, philosophy, history, religion, and the visual and performing arts. He characterized the scientific nature of a study as depending upon:
The conviction that perception gives access to reality
The self-evident nature of logical reasoning
The principle of sufficient reason
But the specific nature of the Geisteswissenschaften is based on the "inner" experience (Erleben), the "comprehension" (Verstehen) of the meaning of expressions and "understanding" in terms of the relations of the part and the whole – in contrast to the Naturwissenschaften, the "explanation" of phenomena by hypothetical laws in the "natural sciences".
Edmund Husserl, a student of Franz Brentano, articulated his phenomenological philosophy in a way that could be thought of as a basis for Dilthey's attempt. Dilthey appreciated Husserl's Logische Untersuchungen (1900/1901, the first draft of Husserl's Phenomenology) as an "epoch-making" epistemological foundation for his conception of the Geisteswissenschaften.
In recent years, 'human science' has been used to refer to "a philosophy and approach to science that seeks to understand human experience in deeply subjective, personal, historical, contextual, cross-cultural, political, and spiritual terms. Human science is the science of qualities rather than of quantities and closes the subject-object split in science. In particular, it addresses the ways in which self-reflection, art, music, poetry, drama, language and imagery reveal the human condition. By being interpretive, reflective, and appreciative, human science re-opens the conversation among science, art, and philosophy."
Objective vs. subjective experiences
Since Auguste Comte, the positivistic social sciences have sought to imitate the approach of the natural sciences by emphasizing the importance of objective external observations and searching for universal laws whose operation is predicated on external initial conditions that do not take into account differences in subjective human perception and attitude. Critics argue that subjective human experience and intention plays such a central role in determining human social behavior that an objective approach to the social sciences is too confining. Rejecting the positivist influence, they argue that the scientific method can rightly be applied to subjective, as well as objective, experience. The term subjective is used in this context to refer to inner psychological experience rather than outer sensory experience. It is not used in the sense of being prejudiced by personal motives or beliefs.
Human science in universities
Since 1878, the University of Cambridge has been home to the Moral Sciences Club, with strong ties to analytic philosophy.
The Human Science degree is relatively young. It has been a degree subject at Oxford since 1969. At University College London, it was proposed in 1973 by Professor J. Z. Young and implemented two years later. His aim was to train general science graduates who would be scientifically literate, numerate and easily able to communicate across a wide range of disciplines, replacing the traditional classical training for higher-level government and management careers. Central topics include the evolution of humans, their behavior, molecular and population genetics, population growth and aging, ethnic and cultural diversity, and human interaction with the environment, including conservation, disease, and nutrition. The study of both biological and social disciplines, integrated within a framework of human diversity and sustainability, should enable the human scientist to develop professional competencies suited to address such multidimensional human problems.
In the United Kingdom, Human Science is offered at the degree level at several institutions which include:
University of Oxford
University College London (as Human Sciences and as Human Sciences and Evolution)
King's College London (as Anatomy, Developmental & Human Biology)
University of Exeter
Durham University (as Health and Human Sciences)
Cardiff University (as Human and Social Sciences)
In other countries:
Osaka University
Waseda University
Tokiwa University
Senshu University
Aoyama Gakuin University (As College of Community Studies)
Kobe University
Kanagawa University
Bunkyo University
Sophia University
Ghent University (in the narrow sense, as Moral sciences, "an integrated empirical and philosophical study of values, norms and world views")
See also
History of the Human Sciences (journal)
Social science
Humanism
Humanities
References
Bibliography
Flew, A. (1986). David Hume: Philosopher of Moral Science, Basil Blackwell, Oxford
Hume, David, An Enquiry Concerning the Principles of Morals
External links
Institute for Comparative Research in Human and Social Sciences (ICR) – Japan
Human Science Lab – London
Human Science(s) across Global Academies
Marxism philosophy
IMRAD
In scientific writing, IMRAD or IMRaD (Introduction, Methods, Results, and Discussion) is a common organizational structure (a document format). IMRaD is the most prominent norm for the structure of a scientific journal article of the original research type.
Overview
Original research articles are typically structured in this basic order
Introduction – Why was the study undertaken? What was the research question, the tested hypothesis or the purpose of the research?
Methods – When, where, and how was the study done? What materials were used or who was included in the study groups (patients, etc.)?
Results – What answer was found to the research question; what did the study find? Was the tested hypothesis true?
Discussion – What might the answer imply and why does it matter? How does it fit in with what other researchers have found? What are the perspectives for future research?
The plot and the flow of the story of the IMRaD style of writing are explained by a 'wine glass model' or hourglass model.
Writing compliant with the IMRaD format (IMRaD writing) typically first presents "(a) the subject that positions the study from the wide perspective" and "(b) an outline of the study", develops through "(c) the study method" and "(d) the results", and concludes with "(e) an outline and conclusion of the findings for each topic" and "(f) the meaning of the study from a wide and general point of view". Here, (a) and (b) appear in the "Introduction" section, (c) and (d) in the "Methods" and "Results" sections respectively, and (e) and (f) in the "Discussion" or "Conclusion" section.
In this sense, the 'wine glass model' (see the pattern diagram shown in Fig. 1) is helpful for explaining how to order the information in IMRaD writing (see pp. 2–3 of Hilary Glasman-Deal). As described in that textbook, the wine glass scheme has two characteristics: the first is its "top-bottom symmetric shape", and the second is its "changing width", i.e. "the top is wide and it narrows towards the middle, and then widens again as it goes down toward the bottom".
The first characteristic, the "top-bottom symmetric shape", represents the symmetry of the story's development. Note that the shape of the top trapezoid (representing the structure of the Introduction) and the shape of the trapezoid at the bottom are reversed. This expresses that the same subject introduced in the Introduction is taken up again, in a form suited to the Discussion/Conclusion section, in reverse order. (See the relationship between (a), (b) and (e), (f) above.)
The second characteristic, the "changing width" of the schema shown in Fig. 1, represents the change in generality of the viewpoint. Along the flow of the story's development, the diagram is drawn wider where the viewpoints are more general, and narrower where they are more specialized and focused.
As the standard format of academic journals
The IMRAD format has been adopted by a steadily increasing number of academic journals since the first half of the 20th century. The IMRAD structure has come to dominate academic writing in the sciences, most notably in empirical biomedicine. The structure of most public health journal articles reflects this trend. Although the IMRAD structure originates in the empirical sciences, it now also regularly appears in academic journals across a wide range of disciplines. Many scientific journals now not only prefer this structure but also use the IMRAD acronym as an instructional device in the instructions to their authors, recommending the use of the four terms as main headings. For example, it is explicitly recommended in the "Uniform Requirements for Manuscripts Submitted to Biomedical Journals" issued by the International Committee of Medical Journal Editors (previously called the Vancouver guidelines): The text of observational and experimental articles is usually (but not necessarily) divided into the following sections: Introduction, Methods, Results, and Discussion. This so-called "IMRAD" structure is not an arbitrary publication format but rather a direct reflection of the process of scientific discovery. Long articles may need subheadings within some sections (especially Results and Discussion) to clarify their content. Other types of articles, such as case reports, reviews, and editorials, probably need to be formatted differently.
The IMRAD structure is also recommended for empirical studies in the 6th edition of the publication manual of the American Psychological Association (APA style). The APA publication manual is widely used by journals in the social, educational and behavioral sciences.
Benefits
The IMRAD structure has proved successful because it facilitates literature review, allowing readers to navigate articles more quickly to locate material relevant to their purpose. But the neat order of IMRAD rarely corresponds to the actual sequence of events or ideas of the research presented; the IMRAD structure effectively supports a reordering that eliminates unnecessary detail, and allows the reader to assess a well-ordered and noise-free presentation of the relevant and significant information. It allows the most relevant information to be presented clearly and logically to the readership, by summarizing the research process in an ideal sequence and without unnecessary detail.
Caveats
The idealised sequence of the IMRAD structure has on occasion been criticised for being too rigid and simplistic. In a radio talk in 1964 the Nobel laureate Peter Medawar criticised this text structure for not giving a realistic representation of the thought processes of the writing scientist: "… the scientific paper may be a fraud because it misrepresents the processes of thought that accompanied or gave rise to the work that is described in the paper". Medawar's criticism was discussed at the XIXth General Assembly of the World Medical Association in 1965. While respondents may argue that it is too much to ask from such a simple instructional device to carry the burden of representing the entire process of scientific discovery, Medawar's caveat expressed his belief that many students and faculty throughout academia treat the structure as a simple panacea. Medawar and others have given testimony both to the importance and to the limitations of the device.
Abstract considerations
In addition to the scientific article itself a brief abstract is usually required for publication. The abstract should, however, be composed to function as an autonomous text, even if some authors and readers may think of it as an almost integral part of the article. The increasing importance of well-formed autonomous abstracts may well be a consequence of the increasing use of searchable digital abstract archives, where a well-formed abstract will dramatically increase the probability for an article to be found by its optimal readership. Consequently, there is a strong recent trend toward developing formal requirements for abstracts, most often structured on the IMRAD pattern, and often with strict additional specifications of topical content items that should be considered for inclusion in the abstract. Such abstracts are often referred to as structured abstracts. The growing importance of abstracts in the era of computerized literature search and information overload has led some users to modify the IMRAD acronym to AIMRAD, in order to give due emphasis to the abstract.
Heading style variations
Usually, the IMRAD article sections use the IMRAD words as headings. A few variations can occur, as follows:
Many journals have a convention of omitting the "Introduction" heading, based on the idea that the reader who begins reading an article does not need to be told that the beginning of the text is the introduction. This print-era proscription is fading since the advent of the Web era, when having an explicit "Introduction" heading helps with navigation via document maps and collapsible/expandable TOC trees. (The same considerations are true regarding the presence or proscription of an explicit "Abstract" heading.)
In some journals, the "Methods" heading may vary, being "Methods and materials", "Materials and methods", or similar phrases. Some journals mandate that exactly the same wording for this heading be used for all articles without exception; other journals reasonably accept whatever each submitted manuscript contains, as long as it is one of these sensible variants.
The "Discussion" section may subsume any "Summary", "Conclusion", or "Conclusions" section, in which case there may or may not be any explicit "Summary", "Conclusion", or "Conclusions" subheading; or the "Summary"/"Conclusion"/"Conclusions" section may be a separate section, using an explicit heading on the same heading hierarchy level as the "Discussion" heading. Which of these variants to use as the default is a matter of each journal's chosen style, as is the question of whether the default style must be forced onto every article or whether sensible inter-article flexibility will be allowed. The journals which use the "Conclusion" or "Conclusions" along with a statement about the "Aim" or "Objective" of the study in the "Introduction" is following the newly proposed acronym "IaMRDC" which stands for "Introduction with aim, Materials and Methods, Results, Discussion, and Conclusion."
Other elements that are typical although not part of the acronym
Disclosure statements (see main article at conflicts of interest in academic publishing)
Reader's theme that is the point of this element's existence: "Why should I (the reader) trust or believe what you (the author) say? Are you just making money off of saying it?"
Appear either in opening footnotes or a section of the article body
Subtypes of disclosure:
Disclosure of funding (grants to the project)
Disclosure of conflict of interest (grants to individuals, jobs/salaries, stock or stock options)
Clinical relevance statement
Reader's theme that is the point of this element's existence: "Why should I (the reader) spend my time reading what you say? How is it relevant to my clinical practice? Basic research is nice, other people's cases are nice, but my time is triaged, so make your case for 'why bother'"
Appear either as a display element (sidebar) or a section of the article body
Format: short, a few sentences or bullet points
Ethical compliance statement
Reader's theme that is the point of this element's existence: "Why should I believe that your study methods were ethical?"
"We complied with the Declaration of Helsinki."
"We got our study design approved by our local institutional review board before proceeding."
"We got our study design approved by our local ethics committee before proceeding."
"We treated our animals in accordance with our local Institutional Animal Care and Use Committee."
Diversity, equity, and inclusion statement
Reader's theme that is the point of this element's existence: "Why should I believe that your study methods consciously included people?" (for example, avoided inadvertently underrepresenting some people—participants or researchers—by race, ethnicity, sex, gender, or other factors)
"We worked to ensure that people of color and transgender people were not underrepresented among the study population."
"One or more of the authors of this paper self-identifies as living with a disability."
"One or more of the authors of this paper self-identifies as transgender."
Additional standardization (reporting guidelines)
In the late 20th century and early 21st, the scientific communities found that the communicative value of journal articles was still much less than it could be if best practices were developed, promoted, and enforced. Thus reporting guidelines (guidelines for how best to report information) arose. The general theme has been to create templates and checklists with the message to the user being, "your article is not complete until you have done all of these things." In the 1970s, the ICMJE (International Committee of Medical Journal Editors) released the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (Uniform Requirements or URM). Other such standards, mostly developed in the 1990s through 2010s, are listed below. The academic medicine community is working hard on trying to raise compliance with good reporting standards, but there is still much to be done; for example, a 2016 review of instructions for authors in 27 emergency medicine journals found insufficient mention of reporting standards, and a 2018 study found that even when journals' instructions for authors mention reporting standards, there is a difference between a mention or badge and enforcing the requirements that the mention or badge represents.
The advent of a need for best practices in data sharing has expanded the scope of these efforts beyond merely the pages of the journal article itself. In fact, from the most rigorous versions of the evidence-based perspective, the distance to go is still quite formidable. FORCE11 is an international coalition that has been developing standards for how to share research data sets properly and most effectively.
Most researchers cannot be familiar with all of the many reporting standards that now exist, but it is enough to know which ones must be followed in one's own work, and to know where to look for details when needed. Several organizations provide help with this task of checking one's own compliance with the latest standards:
The EQUATOR Network
The BioSharing collaboration (biosharing.org)
Several important webpages on this topic are:
NLM's list at Research Reporting Guidelines and Initiatives: By Organization
The EQUATOR Network's list at Reporting guidelines and journals: fact & fiction
TRANSPOSE (Transparency in Scholarly Publishing for Open Scholarship Evolution), "a grassroots initiative to build a crowdsourced database of journal policies," allowing faster and easier lookup and comparison, and potentially spurring harmonization
Relatedly, SHERPA provides compliance-checking tools, and AllTrials provides a rallying point, for efforts to enforce openness and completeness of clinical trial reporting. These efforts stand against publication bias and against excessive corporate influence on scientific integrity.
See also
Case report
Case series
Eight-legged essay
Five paragraph essay
IRAC
Journal Article Tag Suite (JATS)
Literature review
Meta-analyses
Schaffer paragraph
Writing
Academic publishing
Scientific documents
Technical communication
Style guides for technical and scientific writing
Academic terminology
Medical publishing
Saponification
Saponification is a process of cleaving esters into carboxylate salts and alcohols by the action of aqueous alkali. Typically, aqueous sodium hydroxide solutions are used. It is an important type of alkaline hydrolysis. When the carboxylate is long chain, its salt is called a soap. The saponification of ethyl acetate gives sodium acetate and ethanol:
CH3COOC2H5 + NaOH → CH3COONa + C2H5OH
Saponification of fats
Vegetable oils and animal fats are the traditional materials that are saponified. These greasy materials, triesters called triglycerides, are usually mixtures derived from diverse fatty acids. In the traditional saponification, the triglyceride is treated with lye, which cleaves the ester bonds, releasing fatty acid salts (soaps) and glycerol. In one simplified version, the saponification of stearin gives sodium stearate.
This process is the main industrial method for producing glycerol.
Some soap-makers leave the glycerol in the soap. Others precipitate the soap by salting it out with sodium chloride.
Fat in a corpse converts into adipocere, often called "grave wax". This process is more common where the amount of fatty tissue is high and the agents of decomposition are absent or only minutely present.
Saponification value
The saponification value is the amount of base required to saponify a fat sample, conventionally expressed as milligrams of potassium hydroxide (KOH) per gram of fat. Soap makers formulate their recipes with a small deficit of lye to account for the unknown deviation of the saponification value between their oil batch and laboratory averages.
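As a concrete illustration of that formulation step, the sketch below (Python) estimates the sodium hydroxide needed for a small batch from saponification values. The SAP figures used here are illustrative placeholders in the typical published range, and the 5% lye deficit is an assumption made for the example, not a recipe.

```python
# Minimal sketch of the lye calculation behind a soap recipe.
# SAP values are in mg KOH per gram of oil; the figures below are illustrative only.
SAP_KOH = {"olive oil": 190.0, "coconut oil": 257.0}

KOH_TO_NAOH = 40.0 / 56.1  # molar-mass ratio converting a KOH figure to its NaOH equivalent

def naoh_needed(oils_g, lye_deficit=0.05):
    """Grams of NaOH for a batch, with a deliberate lye deficit ("superfatting")."""
    mg_koh = sum(grams * SAP_KOH[name] for name, grams in oils_g.items())
    grams_naoh = (mg_koh / 1000.0) * KOH_TO_NAOH
    return grams_naoh * (1.0 - lye_deficit)

# Example batch: 800 g olive oil + 200 g coconut oil -> roughly 138 g NaOH.
print(round(naoh_needed({"olive oil": 800, "coconut oil": 200}), 1))
```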
Mechanism of basic hydrolysis
The hydroxide anion adds to the carbonyl group of the ester. The immediate product is a tetrahedral intermediate, sometimes described as an orthoester.
Expulsion of the alkoxide from this intermediate generates a carboxylic acid.
The alkoxide ion is a strong base, so the proton is transferred from the carboxylic acid to the alkoxide ion, giving the carboxylate salt and an alcohol.
In a classic laboratory procedure, the triglyceride trimyristin is obtained by extracting it from nutmeg with diethyl ether. Saponification to the soap sodium myristate takes place using NaOH in water. Treating the soap with hydrochloric acid gives myristic acid.
Saponification of fatty acids
The reaction of fatty acids with base is the other main method of saponification. In this case, the reaction involves neutralization of the carboxylic acid. The neutralization method is used to produce industrial soaps such as those derived from magnesium, the transition metals, and aluminium. This method is ideal for producing soaps that are derived from a single fatty acid, which leads to soaps with predictable physical properties, as required by many engineering applications.
Applications
Hard and soft soaps
Depending on the nature of the alkali used in their production, soaps have distinct properties. Sodium hydroxide (NaOH) produces "hard" soaps; hard soaps can also be used in water containing Mg, Cl, and Ca salts. By contrast, potassium soaps (derived using KOH) are "soft" soaps. The fatty acid source also affects the soap's melting point. Most early hard soaps were manufactured using animal fats and KOH extracted from wood ash; these were broadly solid. However, the majority of modern soaps are manufactured from polyunsaturated triglycerides such as vegetable oils. As in the triglycerides from which they are formed, the salts of these acids have weaker intermolecular forces and thus lower melting points.
Lithium soaps
Lithium 12-hydroxystearate and other lithium soaps of fatty acids are important constituents of lubricating greases. In lithium-based greases, lithium carboxylates are thickeners. "Complex soaps" are also common, these being combinations of more than one acid salt, such as azelaic or acetic acid.
Fire extinguishers
Fires involving cooking fats and oils (classified as class K (US) or F (Australia/Europe/Asia)) burn hotter than most flammable liquids, rendering a standard class B extinguisher ineffective. Such fires should be extinguished with a wet chemical extinguisher. Extinguishers of this type are designed to extinguish cooking fats and oils through saponification. The extinguishing agent rapidly converts the burning substance to a non-combustible soap.
Oil paints
Saponification can occur in oil paintings over time, causing visible damage and deformation. Oil paints are composed of pigment molecules suspended in an oil-binding medium. Heavy metal salts are often used as pigment molecules, such as in lead white, red lead, and zinc white. If those heavy metal salts react with free fatty acids in the oil medium, metal soaps may form in a paint layer that can then migrate outward to the painting's surface.
Saponification in oil paintings was described as early as 1912. It is believed to be widespread, having been observed in many works dating from the fifteenth through the twentieth centuries; works of different geographic origin; and works painted on various supports, such as canvas, paper, wood, and copper. Chemical analysis may reveal saponification occurring in a painting's deeper layers before any signs are visible on the surface, even in paintings centuries old.
The saponified regions may deform the painting's surface through the formation of visible lumps or protrusions that can scatter light. These soap lumps may be prominent only on certain regions of the painting rather than throughout. In John Singer Sargent's famous Portrait of Madame X, for example, the lumps only appear on the blackest areas, which may be because of the artist's use of more medium in those areas to compensate for the tendency of black pigments to soak it up. The process can also form chalky white deposits on a painting's surface, a deformation often described as "blooming" or "efflorescence", and may also contribute to the increased transparency of certain paint layers within an oil painting over time.
Saponification does not occur in all oil paintings and many details are unresolved. At present, retouching is the only known restoration method.
See also
Soap
Saponification value
Ester hydrolysis
External links
Animation of the mechanism of base hydrolysis
Nucleophilic substitution reactions
Chemical processes
Soaps
Bases (chemistry)
Conservation and restoration of paintings
Reactions of esters
Theory
A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory's assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.
In modern science, the term "theory" refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for them, or empirical contradiction ("falsification") of them. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge, in contrast to more common uses of the word "theory" that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis). Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.
Theories guide the enterprise of finding facts rather than of reaching goals, and are neutral concerning alternatives among values. A theory can be a body of knowledge, which may or may not be associated with particular explanatory models. To theorize is to develop this body of knowledge.
The word theory or "in theory" is sometimes used outside of science to refer to something which the speaker did not experience or test before. In science, this same concept is referred to as a hypothesis, and the word "hypothetically" is used both inside and outside of science. In its usage outside of science, the word "theory" is very often contrasted to "practice" (from Greek praxis, πρᾶξις) a Greek term for doing, which is opposed to theory. A "classical example" of the distinction between "theoretical" and "practical" uses the discipline of medicine: medical theory involves trying to understand the causes and nature of health and sickness, while the practical side of medicine is trying to make people healthy. These two things are related but can be independent, because it is possible to research health and sickness without curing specific patients, and it is possible to cure a patient without knowing how the cure worked.
Ancient usage
The English word theory derives from a technical term in philosophy in Ancient Greek. As an everyday word, theoria, , meant "looking at, viewing, beholding", but in more technical contexts it came to refer to contemplative or speculative understandings of natural things, such as those of natural philosophers, as opposed to more practical ways of knowing things, like that of skilled orators or artisans. English-speakers have used the word theory since at least the late 16th century. Modern uses of the word theory derive from the original definition, but have taken on new shades of meaning, still based on the idea of a theory as a thoughtful and rational explanation of the general nature of things.
Although it has more mundane meanings in Greek, the word apparently developed special uses early in the recorded history of the Greek language. In the book From Religion to Philosophy, Francis Cornford suggests that the Orphics used the word theoria to mean "passionate sympathetic contemplation". Pythagoras changed the word to mean "the passionless contemplation of rational, unchanging truth" of mathematical knowledge, because he considered this intellectual pursuit the way to reach the highest plane of existence. Pythagoras emphasized subduing emotions and bodily desires to help the intellect function at the higher plane of theory. Thus, it was Pythagoras who gave the word theory the specific meaning that led to the classical and modern concept of a distinction between theory (as uninvolved, neutral thinking) and practice.
Aristotle's terminology, as already mentioned, contrasts theory with praxis or practice, and this contrast persists to the present day. For Aristotle, both practice and theory involve thinking, but the aims are different. Theoretical contemplation considers things humans do not move or change, such as nature, so it has no human aim apart from itself and the knowledge it helps create. On the other hand, praxis involves thinking, but always with an aim to desired actions, whereby humans cause change or movement themselves for their own ends. Any human movement that involves no conscious choice and thinking could not be an example of praxis or doing.
Formality
Theories are analytical tools for understanding, explaining, and making predictions about a given subject matter. There are theories in many and varied fields of study, including the arts and sciences. A formal theory is syntactic in nature and is only meaningful when given a semantic component by applying it to some content (e.g., facts and relationships of the actual historical world as it is unfolding). Theories in various fields of study are often expressed in natural language, but can be constructed in such a way that their general form is identical to a theory as it is expressed in the formal language of mathematical logic. Theories may be expressed mathematically, symbolically, or in common language, but are generally expected to follow principles of rational thought or logic.
A theory is constructed of a set of sentences that are thought to be true statements about the subject under consideration. However, the truth of any one of these statements is always relative to the whole theory. Therefore, the same statement may be true with respect to one theory and not true with respect to another. This is analogous, in ordinary language, to how a statement such as "He is a terrible person" cannot be judged as true or false without reference to some interpretation of who "He" is and, for that matter, what a "terrible person" is under the theory.
Sometimes two theories have exactly the same explanatory power because they make the same predictions. A pair of such theories is called indistinguishable or observationally equivalent, and the choice between them reduces to convenience or philosophical preference.
The form of theories is studied formally in mathematical logic, especially in model theory. When theories are studied in mathematics, they are usually expressed in some formal language and their statements are closed under application of certain procedures called rules of inference. A special case of this, an axiomatic theory, consists of axioms (or axiom schemata) and rules of inference. A theorem is a statement that can be derived from those axioms by application of these rules of inference. Theories used in applications are abstractions of observed phenomena and the resulting theorems provide solutions to real-world problems. Obvious examples include arithmetic (abstracting concepts of number), geometry (concepts of space), and probability (concepts of randomness and likelihood).
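A minimal illustration, drawn from a standard textbook example rather than from any formal system discussed here, is the elementary theory of groups: theorems follow from a handful of axioms by ordinary equational reasoning.

\textbf{Axioms:}\quad (a\ast b)\ast c = a\ast(b\ast c), \qquad a\ast e = e\ast a = a, \qquad a\ast a^{-1} = a^{-1}\ast a = e .

\textbf{Derived theorem (uniqueness of the identity):}\quad \text{if } e' \text{ also satisfies the identity axiom, then } e' = e'\ast e = e .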
Gödel's incompleteness theorem shows that no consistent, recursively enumerable theory (that is, one whose theorems form a recursively enumerable set) in which the concept of natural numbers can be expressed, can include all true statements about them. As a result, some domains of knowledge cannot be formalized, accurately and completely, as mathematical theories. (Here, formalizing accurately and completely means that all true propositions—and only true propositions—are derivable within the mathematical system.) This limitation, however, in no way precludes the construction of mathematical theories that formalize large bodies of scientific knowledge.
Underdetermination
A theory is underdetermined (also called indeterminacy of data to theory) if a rival, inconsistent theory is at least as consistent with the evidence. Underdetermination is an epistemological issue about the relation of evidence to conclusions.
A theory that lacks supporting evidence is generally, more properly, referred to as a hypothesis.
Intertheoretic reduction and elimination
If a new theory better explains and predicts a phenomenon than an old theory (i.e., it has more explanatory power), we are justified in believing that the newer theory describes reality more correctly. This is called an intertheoretic reduction because the terms of the old theory can be reduced to the terms of the new one. For instance, our historical understanding of sound, light and heat has been reduced to wave compressions and rarefactions, electromagnetic waves, and molecular kinetic energy, respectively. These terms, which are identified with each other, are called intertheoretic identities. When an old and new theory are parallel in this way, we can conclude that the new one describes the same reality, only more completely.
When a new theory uses new terms that do not reduce to terms of an older theory, but rather replace them because they misrepresent reality, it is called an intertheoretic elimination. For instance, the obsolete scientific theory that put forward an understanding of heat transfer in terms of the movement of caloric fluid was eliminated when a theory of heat as energy replaced it. Also, the theory that phlogiston is a substance released from burning and rusting material was eliminated with the new understanding of the reactivity of oxygen.
Versus theorems
Theories are distinct from theorems. A theorem is derived deductively from axioms (basic assumptions) according to a formal system of rules, sometimes as an end in itself and sometimes as a first step toward being tested or applied in a concrete situation; theorems are said to be true in the sense that the conclusions of a theorem are logical consequences of the axioms. Theories are abstract and conceptual, and are supported or challenged by observations in the world. They are 'rigorously tentative', meaning that they are proposed as true and expected to satisfy careful examination to account for the possibility of faulty inference or incorrect observation. Sometimes theories are incorrect, meaning that an explicit set of observations contradicts some fundamental assumption or application of the theory, but more often theories are corrected to conform to new observations, by restricting the class of phenomena the theory applies to or changing the assertions made. An example of the former is the restriction of classical mechanics to phenomena involving macroscopic length scales and particle speeds much lower than the speed of light.
Theory–practice relationship
Theory is often distinguished from practice or praxis. The question of whether theoretical models of work are relevant to work itself is of interest to scholars of professions such as medicine, engineering, law, and management.
The gap between theory and practice has been framed as a problem of knowledge transfer, in which research knowledge must be translated for application in practice and practitioners must be made aware of it. Academics have been criticized for not attempting to transfer the knowledge they produce to practitioners. Another framing supposes that theory and practice seek to understand different problems and model the world in different terms (using different ontologies and epistemologies). A further framing says that research does not produce theory that is relevant to practice.
In the context of management, Van de Ven and Johnson propose a form of engaged scholarship in which scholars examine problems that occur in practice, in an interdisciplinary fashion, producing both new practical results and new theoretical models, with the theoretical results shared in an academic fashion. They use a metaphor of "arbitrage" of ideas between disciplines, distinguishing it from collaboration.
Scientific
In science, the term "theory" refers to "a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment." Theories must also meet further requirements, such as the ability to make falsifiable predictions with consistent accuracy across a broad area of scientific inquiry, and production of strong evidence in favor of the theory from multiple independent sources (consilience).
The strength of a scientific theory is related to the diversity of phenomena it can explain, which is measured by its ability to make falsifiable predictions with respect to those phenomena. Theories are improved (or replaced by better theories) as more evidence is gathered, so that accuracy in prediction improves over time; this increased accuracy corresponds to an increase in scientific knowledge. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing diseases.
Definitions from scientific organizations
The United States National Academy of Sciences defines scientific theories as follows:The formal scientific definition of "theory" is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics) ... One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
The term theory is not appropriate for describing scientific models or untested but intricate hypotheses.
Philosophical views
The logical positivists thought of scientific theories as deductive theories—that a theory's content is based on some formal system of logic and on basic axioms. In a deductive theory, any sentence which is a logical consequence of one or more of the axioms is also a sentence of that theory. This is called the received view of theories.
In the semantic view of theories, which has largely replaced the received view, theories are viewed as scientific models. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country. In this approach, theories are a specific category of models that fulfill the necessary criteria. (See Theories as models for further discussion.)
In physics
In physics the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries, like equality of locations in space or in time, or identity of electrons, etc.)—which is capable of producing experimental predictions for a given category of physical systems. One good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered adequately tested, with new ones always in the making and perhaps untested.
Regarding the term "theoretical"
Certain tests may be infeasible or technically difficult. As a result, theories may make predictions that have not been confirmed or proven incorrect. These predictions may be described informally as "theoretical". They can be tested later, and if they are incorrect, this may lead to revision, invalidation, or rejection of the theory.
Mathematical
In mathematics, the term theory is used differently from its use in science; necessarily so, since mathematics contains no explanations of natural phenomena per se, even though it may help provide insight into natural systems or be inspired by them. In the general sense, a mathematical theory is a branch of mathematics devoted to some specific topics or methods, such as set theory, number theory, group theory, probability theory, game theory, control theory, perturbation theory, etc., such as might be appropriate for a single textbook.
In mathematical logic, a theory has a related but different sense: it is the collection of the theorems that can be deduced from a given set of axioms, together with a given set of inference rules.
Philosophical
A theory can be either descriptive as in science, or prescriptive (normative) as in philosophy. The latter are those whose subject matter consists not of empirical data, but rather of ideas. At least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation.
A field of study is sometimes named a "theory" because its basis is some initial set of assumptions describing the field's approach to the subject. These assumptions are the elementary theorems of the particular theory, and can be thought of as the axioms of that field. Some commonly known examples include set theory and number theory; however literary theory, critical theory, and music theory are also of the same form.
Metatheory
One form of philosophical theory is a metatheory or meta-theory. A metatheory is a theory whose subject matter is some other theory or set of theories. In other words, it is a theory about theories. Statements made in the metatheory about the theory are called metatheorems.
Political
A political theory is an ethical theory about the law and government. Often the term "political theory" refers to a general view, or specific ethic, political belief, or attitude about politics.
Jurisprudential
In social science, jurisprudence is the philosophical theory of law. Contemporary philosophy of law addresses problems internal to law and legal systems, and problems of law as a particular social institution.
Examples
Most of the following are scientific theories. Some are not, but rather encompass a body of knowledge or art, such as Music theory and Visual Arts Theories.
Anthropology:
Carneiro's circumscription theory
Astronomy:
Alpher–Bethe–Gamow theory —
B2FH Theory —
Copernican theory —
Newton's theory of gravitation —
Hubble's law —
Kepler's laws of planetary motion — Ptolemaic theory
Biology:
Cell theory —
Chemiosmotic theory —
Evolution —
Germ theory —
Symbiogenesis
Chemistry:
Molecular theory —
Kinetic theory of gases —
Molecular orbital theory —
Valence bond theory —
Transition state theory —
RRKM theory —
Chemical graph theory —
Flory–Huggins solution theory —
Marcus theory —
Lewis theory (successor to Brønsted–Lowry acid–base theory) —
HSAB theory —
Debye–Hückel theory —
Thermodynamic theory of polymer elasticity —
Reptation theory —
Polymer field theory —
Møller–Plesset perturbation theory —
Density functional theory —
Frontier molecular orbital theory —
Polyhedral skeletal electron pair theory —
Baeyer strain theory —
Quantum theory of atoms in molecules —
Collision theory —
Ligand field theory (successor to Crystal field theory) —
Variational transition-state theory —
Benson group increment theory —
Specific ion interaction theory
Climatology:
Climate change theory (general study of climate changes)
anthropogenic climate change (ACC)/
anthropogenic global warming (AGW) theories (due to human activity)
Computer Science:
Automata theory —
Queueing theory
Cosmology:
Big Bang Theory —
Cosmic inflation —
Loop quantum gravity —
Superstring theory —
Supergravity —
Supersymmetric theory —
Multiverse theory —
Holographic principle —
Quantum gravity —
M-theory
Economics:
Macroeconomic theory —
Microeconomic theory —
Law of Supply and demand
Education:
Constructivist theory —
Critical pedagogy theory —
Education theory —
Multiple intelligence theory —
Progressive education theory
Engineering:
Circuit theory —
Control theory —
Signal theory —
Systems theory —
Information theory
Film:
Film theory
Geology:
Plate tectonics
Humanities:
Critical theory
Jurisprudence or 'Legal theory':
Natural law —
Legal positivism —
Legal realism —
Critical legal studies
Law: see Jurisprudence; also Case theory
Linguistics:
X-bar theory —
Government and Binding —
Principles and parameters —
Universal grammar
Literature:
Literary theory
Mathematics:
Approximation theory —
Arakelov theory —
Asymptotic theory —
Bifurcation theory —
Catastrophe theory —
Category theory —
Chaos theory —
Choquet theory —
Coding theory —
Combinatorial game theory —
Computability theory —
Computational complexity theory —
Deformation theory —
Dimension theory —
Ergodic theory —
Field theory —
Galois theory —
Game theory —
Gauge theory —
Graph theory —
Group theory —
Hodge theory —
Homology theory —
Homotopy theory —
Ideal theory —
Intersection theory —
Invariant theory —
Iwasawa theory —
K-theory —
KK-theory —
Knot theory —
L-theory —
Lie theory —
Littlewood–Paley theory —
Matrix theory —
Measure theory —
Model theory —
Module theory —
Morse theory —
Nevanlinna theory —
Number theory —
Obstruction theory —
Operator theory —
Order theory —
PCF theory —
Perturbation theory —
Potential theory —
Probability theory —
Ramsey theory —
Rational choice theory —
Representation theory —
Ring theory —
Set theory —
Shape theory —
Small cancellation theory —
Spectral theory —
Stability theory —
Stable theory —
Sturm–Liouville theory —
Surgery theory —
Twistor theory —
Yang–Mills theory
Music:
Music theory
Philosophy:
Proof theory —
Speculative reason —
Theory of truth —
Type theory —
Value theory —
Virtue theory
Physics:
Acoustic theory —
Antenna theory —
Atomic theory —
BCS theory —
Conformal field theory —
Dirac hole theory —
Dynamo theory —
Landau theory —
M-theory —
Perturbation theory —
Theory of relativity (successor to classical mechanics) —
Gauge theory —
Quantum field theory —
Scattering theory —
String theory —
Quantum information theory
Psychology:
Theory of mind —
Cognitive dissonance theory —
Attachment theory —
Object permanence —
Poverty of stimulus —
Attribution theory —
Self-fulfilling prophecy —
Stockholm syndrome
Public Budgeting:
Incrementalism —
Zero-based budgeting
Public Administration:
Organizational theory
Semiotics:
Intertheoricity –
Transferogenesis
Sociology:
Critical theory —
Engaged theory —
Social theory —
Sociological theory –
Social capital theory
Statistics:
Extreme value theory
Theatre:
Performance theory
Visual Arts:
Aesthetics —
Art educational theory —
Architecture —
Composition —
Anatomy —
Color theory —
Perspective —
Visual perception —
Geometry —
Manifolds
Other:
Obsolete scientific theories
See also
Falsifiability
Hypothesis testing
Physical law
Predictive power
Testability
Theoretical definition
Notes
References
Citations
Sources
Davidson Reynolds, Paul (1971). A primer in theory construction. Boston: Allyn and Bacon.
Guillaume, Astrid (2015). "Intertheoricity: Plasticity, Elasticity and Hybridity of Theories. Part II: Semiotics of Transferogenesis", Human and Social Studies, Vol. 4, No. 2, Walter de Gruyter, Boston, Berlin, pp. 59–77.
Guillaume, Astrid (2015). "The Intertheoricity: Plasticity, Elasticity and Hybridity of Theories", Human and Social Studies, Vol. 4, No. 1, Walter de Gruyter, Boston, Berlin, pp. 13–29.
Hawking, Stephen (1996). A Brief History of Time (Updated and expanded ed.). New York: Bantam Books, p. 15.
Popper, Karl (1963), Conjectures and Refutations, Routledge and Kegan Paul, London, UK, pp. 33–39. Reprinted in Theodore Schick (ed., 2000), Readings in the Philosophy of Science, Mayfield Publishing Company, Mountain View, California, USA, pp. 9–13.
Zima, Peter V. (2007). "What is theory? Cultural theory as discourse and dialogue". London: Continuum (translated from: Was ist Theorie? Theoriebegriff und Dialogische Theorie in der Kultur- und Sozialwissenschaften. Tübingen: A. Franke Verlag, 2004).
Further reading
Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32.
External links
"How science works: Even theories change", Understanding Science by the University of California Museum of Paleontology.
What is a Theory?
Reversible process (thermodynamics)
In thermodynamics, a reversible process is a process, involving a system and its surroundings, whose direction can be reversed by infinitesimal changes in some properties of the surroundings, such as pressure or temperature.
Throughout an entire reversible process, the system is in thermodynamic equilibrium, both physical and chemical, and nearly in pressure and temperature equilibrium with its surroundings. This prevents unbalanced forces and acceleration of moving system boundaries, which in turn avoids friction and other dissipation.
To maintain equilibrium, reversible processes are extremely slow (quasistatic). The process must occur slowly enough that after some small change in a thermodynamic parameter, the physical processes in the system have enough time for the other parameters to self-adjust to match the new, changed parameter value. For example, if a container of water has sat in a room long enough to match the steady temperature of the surrounding air, for a small change in the air temperature to be reversible, the whole system of air, water, and container must wait long enough for the container and air to settle into a new, matching temperature before the next small change can occur.
While processes in isolated systems are never reversible, cyclical processes can be reversible or irreversible. Reversible processes are hypothetical or idealized but central to the second law of thermodynamics. Melting or freezing of ice in water is an example of a realistic process that is nearly reversible.
Additionally, the system must be in (quasistatic) equilibrium with the surroundings at all times, and there must be no dissipative effects, such as friction, for a process to be considered reversible.
Reversible processes are useful in thermodynamics because they are so idealized that the equations for heat and expansion/compression work are simple. This enables the analysis of model processes, which usually define the maximum efficiency attainable in corresponding real processes. Other applications exploit that entropy and internal energy are state functions whose change depends only on the initial and final states of the system, not on how the process occurred. Therefore, the entropy and internal-energy change in a real process can be calculated quite easily by analyzing a reversible process connecting the real initial and final system states. In addition, reversibility defines the thermodynamic condition for chemical equilibrium.
Overview
Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. An ideal thermodynamically reversible process is free of dissipative losses and therefore the magnitude of work performed by or on the system would be maximized. The incomplete conversion of heat to work in a cyclic process, however, applies to both reversible and irreversible cycles. The dependence of work on the path of the thermodynamic process is also unrelated to reversibility, since expansion work, which can be visualized on a pressure–volume diagram as the area beneath the equilibrium curve, is different for different reversible expansion processes (e.g. adiabatic, then isothermal; vs. isothermal, then adiabatic) connecting the same initial and final states.
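As a concrete illustration of this path dependence, the standard ideal-gas expressions for reversible expansion work along an isothermal and an adiabatic leg (textbook results, with n, R, T, V, P and γ carrying their usual meanings) are:

W_{\text{isothermal}} = nRT \ln\frac{V_2}{V_1}, \qquad W_{\text{adiabatic}} = \frac{P_1 V_1 - P_2 V_2}{\gamma - 1} .

Because the isothermal leg's contribution depends on the temperature at which it is traversed, performing it before or after the adiabatic leg yields a different total work even though the overall initial and final states are the same.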
Irreversibility
In an irreversible process, finite changes are made; therefore the system is not at equilibrium throughout the process. In a cyclic process, the difference between the reversible work and the actual work for a process is given by the following relation:
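A commonly quoted form of this relation, assuming heat exchange with surroundings at a fixed temperature T_surr and writing S_gen for the entropy generated by irreversibility (both symbols introduced here for illustration), is the lost-work expression:

W_{\text{lost}} = W_{\text{rev}} - W_{\text{actual}} = T_{\text{surr}}\, S_{\text{gen}} \geq 0 ,

with equality only in the reversible limit, where no entropy is generated.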
Boundaries and states
Simple reversible processes change the state of a system in such a way that the net change in the combined entropy of the system and its surroundings is zero. (The entropy of the system alone is conserved only in reversible adiabatic processes.) Nevertheless, the Carnot cycle demonstrates that the state of the surroundings may change in a reversible process as the system returns to its initial state. Reversible processes define the boundaries of how efficient heat engines can be in thermodynamics and engineering: a reversible process is one where the machine has maximum efficiency (see Carnot cycle).
In some cases, it may be important to distinguish between reversible and quasistatic processes. Reversible processes are always quasistatic, but the converse is not always true. For example, an infinitesimal compression of a gas in a cylinder where there is friction between the piston and the cylinder is a quasistatic, but not reversible process. Although the system has been driven from its equilibrium state by only an infinitesimal amount, energy has been irreversibly lost to waste heat, due to friction, and cannot be recovered by simply moving the piston in the opposite direction by the same infinitesimal amount.
Engineering archaisms
Historically, the term Tesla principle was used to describe (among other things) certain reversible processes invented by Nikola Tesla. However, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner. It was developed during Tesla's research in alternating currents where the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved and machinery fastened to the shaft was operated by the engine. If the turbine's operation was reversed, the disks acted as a pump.
Footnotes
See also
Time reversibility
Carnot cycle
Entropy production
Toffoli gate
Time evolution
Quantum circuit
Reversible computing
Maxwell's demon
Stirling engine
References
Mixed model
A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences.
They are particularly useful in settings where repeated measurements are made on the same statistical units (see also longitudinal study), or where measurements are made on clusters of related statistical units. Mixed models are often preferred over traditional analysis of variance regression models because they do not rely on the assumption of independent observations. Further, they are flexible in dealing with missing values and uneven spacing of repeated measurements. Mixed model analysis allows measurements to be explicitly modeled with a wider variety of correlation and variance–covariance structures, avoiding biased estimation.
This page will discuss mainly linear mixed-effects models rather than generalized linear mixed models or nonlinear mixed-effects models.
Qualitative Description
Linear mixed models (LMMs) are statistical models that incorporate fixed and random effects to accurately represent non-independent data structures. LMM is an alternative to analysis of variance. ANOVA often assumes the independence of observations within each group; however, this assumption may not hold in non-independent data, such as multilevel/hierarchical, longitudinal, or correlated datasets.
Non-independent sets are ones in which the variability between outcomes is due to correlations within groups or between groups. Mixed models properly account for nested/hierarchical data structures where observations are influenced by their nested associations. For example, when studying education methods involving multiple schools, there are multiple levels of variables to consider. The individual level/lower level comprises individual students or teachers within the school. The observations obtained from each student or teacher are nested within their school. For example, Student A is a unit within School A. The next higher level is the school. At the higher level, the school contains multiple individual students and teachers. The school level influences the observations obtained from the students and teachers. For example, School A and School B are the higher-level units, each with its own set of students and teachers. This represents a hierarchical data scheme. A solution to modeling hierarchical data is using linear mixed models.
LMMs allow us to understand the important effects between and within levels while incorporating the corrections for standard errors for non-independence embedded in the data structure.
The Fixed Effect
Fixed effects encapsulate the tendencies/trends that are consistent at the levels of primary interest. These effects are considered fixed because they are non-random and assumed to be constant for the population being studied. For example, when studying education, a fixed effect could represent overall school-level effects that are consistent across all schools.
While the hierarchy of the data set is typically obvious, the specific fixed effects that affect the average responses for all subjects must be specified. Some fixed effect coefficients are sufficient without corresponding random effects, whereas other fixed coefficients only represent an average where the individual units are random. These may be determined by incorporating random intercepts and slopes.
In most situations, several related models are considered and the model that best represents a universal model is adopted.
The Random Effect, ε
A key component of the mixed model is the incorporation of random effects with the fixed effect. Fixed effects are often fitted to represent the underlying model. In linear mixed models, the true regression of the population is linear, β. The fixed data is fitted at the highest level. Random effects introduce statistical variability at different levels of the data hierarchy. These account for the unmeasured sources of variance that affect certain groups in the data. For example, the differences between student 1 and student 2 in the same class, or the differences between class 1 and class 2 in the same school.
History and current status
Ronald Fisher introduced random effects models to study the correlations of trait values between relatives. In the 1950s, Charles Roy Henderson provided best linear unbiased estimates of fixed effects and best linear unbiased predictions of random effects. Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. They are prominently used in research involving human and animal subjects in fields ranging from genetics to marketing, and have also been used in baseball and industrial statistics.
Mixed linear model association methods have improved the prevention of false positive associations. Populations are deeply interconnected, and the relatedness structure of population dynamics is extremely difficult to model without the use of mixed models. Linear mixed models may not, however, be the only solution. LMMs have a constant-residual-variance assumption that is sometimes violated when accounting for deeply associated continuous and binary traits.
Definition
In matrix notation a linear mixed model can be represented as
y = Xβ + Zu + ε
where
y is a known vector of observations, with mean E(y) = Xβ;
β is an unknown vector of fixed effects;
u is an unknown vector of random effects, with mean E(u) = 0 and variance–covariance matrix Var(u) = G;
ε is an unknown vector of random errors, with mean E(ε) = 0 and variance Var(ε) = R;
X is the known design matrix for the fixed effects, relating the observations y to β;
Z is the known design matrix for the random effects, relating the observations y to u.
For example, if each observation can belong to any zero or more of c categories then Z, which has one row per observation, can be chosen to have c columns, where a value of 1 for a matrix element of Z indicates that an observation is known to belong to a category and a value of 0 indicates that an observation is known not to belong to a category. The inferred value of u for a category is then a category-specific intercept. If Z has additional columns, where the non-zero values are instead the value of an independent variable for an observation, then the corresponding inferred value of u is a category-specific slope for that independent variable. The prior distribution for the category intercepts and slopes is described by the covariance matrix G.
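As a small hypothetical illustration (values invented for this example), suppose four observations fall into two categories, the first two in category A and the last two in category B, and the model contains only category-specific random intercepts; then Z is an indicator matrix and u holds the two intercepts:

Z = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}, \qquad u = \begin{pmatrix} u_A \\ u_B \end{pmatrix} .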
Estimation
The joint density of y and u can be written as: f(y, u) = f(y | u) f(u).
Assuming normality, u ~ N(0, G), ε ~ N(0, R) and Cov(u, ε) = 0, and maximizing the joint density over β and u, gives Henderson's "mixed model equations" (MME) for linear mixed models:

\begin{pmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{pmatrix} \begin{pmatrix} \hat{\beta} \\ \hat{u} \end{pmatrix} = \begin{pmatrix} X'R^{-1}y \\ Z'R^{-1}y \end{pmatrix}

where, for example, X' is the matrix transpose of X and R^{-1} is the matrix inverse of R.
The solutions to the MME, β̂ and û, are best linear unbiased estimates and predictors for β and u, respectively. This is a consequence of the Gauss–Markov theorem when the conditional variance of the outcome is not scalable to the identity matrix. When the conditional variance is known, the inverse-variance weighted least squares estimate is the best linear unbiased estimate. However, the conditional variance is rarely, if ever, known. So it is desirable to jointly estimate the variance and weighted parameter estimates when solving MMEs.
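The following minimal sketch (synthetic data and arbitrary covariance values chosen purely for illustration) assembles and solves Henderson's mixed model equations directly with NumPy for a random-intercept model with two groups:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 observations, one covariate, two groups (3 observations each).
n, q = 6, 2
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])          # fixed effects: intercept + slope
Z = np.kron(np.eye(q), np.ones((n // q, 1)))  # random intercept per group
beta_true = np.array([1.0, 2.0])
u_true = np.array([0.5, -0.5])
y = X @ beta_true + Z @ u_true + 0.1 * rng.standard_normal(n)

# Covariance matrices assumed known for this sketch.
R = 0.01 * np.eye(n)   # residual covariance
G = 0.25 * np.eye(q)   # random-effect covariance
Rinv, Ginv = np.linalg.inv(R), np.linalg.inv(G)

# Henderson's mixed model equations:
# [X'R^-1X   X'R^-1Z      ] [beta]   [X'R^-1y]
# [Z'R^-1X   Z'R^-1Z + G^-1] [ u  ] = [Z'R^-1y]
lhs = np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
                [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Ginv]])
rhs = np.concatenate([X.T @ Rinv @ y, Z.T @ Rinv @ y])
sol = np.linalg.solve(lhs, rhs)
beta_hat, u_hat = sol[:2], sol[2:]
print("beta_hat:", beta_hat, "u_hat:", u_hat)

In practice G and R are not known in advance and are estimated jointly (for example by ML or REML), which is what the software packages discussed next automate.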
One method used to fit such mixed models is that of the expectation–maximization algorithm (EM) where the variance components are treated as unobserved nuisance parameters in the joint likelihood. Currently, this is the method implemented in statistical software such as Python (statsmodels package) and SAS (proc mixed), and, as an initial step only, in R's nlme package lme(). The solution to the mixed model equations is a maximum likelihood estimate when the distribution of the errors is normal.
There are several other methods to fit mixed models, including using a mixed effect model (MEM) initially and then Newton–Raphson (used by R package nlme's lme()); penalized least squares to get a profiled log-likelihood depending only on the (low-dimensional) variance–covariance parameters of u, i.e., its covariance matrix G, followed by modern direct optimization of that reduced objective function (used by R's lme4 package lmer() and the Julia package MixedModels.jl); and direct optimization of the likelihood (used by e.g. R's glmmTMB). Notably, while the canonical form proposed by Henderson is useful for theory, many popular software packages use a different formulation for numerical computation in order to take advantage of sparse matrix methods (e.g. lme4 and MixedModels.jl).
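As a usage-level sketch with the Python statsmodels package mentioned above (the data frame and column names here are hypothetical and the data set is deliberately tiny), a random-intercept model grouped by school can be fitted as:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student measurement.
df = pd.DataFrame({
    "score":  [70, 74, 78, 82, 66, 70, 75, 79, 73, 77, 81, 86],
    "hours":  [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4],
    "school": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
})

# Fixed effect: hours of study; random intercept: school.
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"])
result = model.fit()
print(result.summary())

# A random slope for hours within each school can be requested via re_formula.
model2 = smf.mixedlm("score ~ hours", data=df, groups=df["school"],
                     re_formula="~hours")

The summary reports the fixed-effect estimates alongside the estimated random-intercept (group) variance; real applications would of course use far more groups and observations than this toy example.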
See also
Nonlinear mixed-effects model
Fixed effects model
Generalized linear mixed model
Linear regression
Mixed-design analysis of variance
Multilevel model
Random effects model
Repeated measures design
Empirical Bayes method
References
Further reading
Pharmacognosy
Pharmacognosy is the study of crude drugs obtained from medicinal plants, animals, fungi, and other natural sources. The American Society of Pharmacognosy defines pharmacognosy as "the study of the physical, chemical, biochemical, and biological properties of drugs, drug substances, or potential drugs or drug substances of natural origin as well as the search for new drugs from natural sources".
Description
The word "pharmacognosy" is derived from two Greek words: , (drug), and gnosis (knowledge) or the Latin verb cognosco (, 'with', and , 'know'; itself a cognate of the Greek verb , , meaning 'I know, perceive'), meaning 'to conceptualize' or 'to recognize'.
The term "pharmacognosy" was used for the first time by the German physician Johann Adam Schmidt (1759–1809) in his published book Lehrbuch der Materia Medica in 1811, and by Anotheus Seydler in 1815, in his Analecta Pharmacognostica.
Originally—during the 19th century and the beginning of the 20th century—"pharmacognosy" was used to define the branch of medicine or commodity sciences (Warenkunde in German) which deals with drugs in their crude, or unprepared, form. Crude drugs are the dried, unprepared material of plant, animal or mineral origin, used for medicine. The study of these materials under this name was first developed in German-speaking areas of Europe, while other language areas often used the older term materia medica taken from the works of Galen and Dioscorides. In German, the term Drogenkunde ("science of crude drugs") is also used synonymously.
As late as the beginning of the 20th century, the subject had developed mainly on the botanical side, being particularly concerned with the description and identification of drugs both in their whole state and in powder form. Such branches of pharmacognosy are still of fundamental importance, particularly for botanical products (widely available as dietary supplements in the U.S. and Canada), quality control purposes, pharmacopoeial protocols and related health regulatory frameworks. At the same time, development in other areas of research has enormously expanded the subject. The advent of the 21st century brought a renaissance of pharmacognosy, and its conventional botanical approach has been broadened to the molecular and metabolomic levels.
In addition to the previously mentioned definition, the American Society of Pharmacognosy defines pharmacognosy as "the study of natural product molecules (typically secondary metabolites) that are useful for their medicinal, ecological, gustatory, or other functional properties." Similarly, the mission of the Pharmacognosy Institute at the University of Illinois at Chicago involves plant-based and plant-related health products for the benefit of human health. Other definitions are more encompassing, drawing on a broad spectrum of biological subjects, including botany, ethnobotany, marine biology, microbiology, herbal medicine, chemistry, biotechnology, phytochemistry, pharmacology, pharmaceutics, clinical pharmacy, and pharmacy practice.
medical ethnobotany: the study of traditional uses of plants for medicinal purposes;
ethnopharmacology: the study of pharmacological qualities of traditional medicinal substances;
phytotherapy: the study of medicinal use of plant extracts;
phytochemistry: the study of chemicals derived from plants (including the identification of new drug candidates derived from plant sources);
zoopharmacognosy: the process by which animals self-medicate, by selecting and using plants, soils, and insects to treat and prevent disease;
marine pharmacognosy: the study of chemicals derived from marine organisms.
Biological background
All plants produce chemical compounds as part of their normal metabolic activities. These phytochemicals are divided into (1) primary metabolites such as sugars and fats, which are found in all plants; and (2) secondary metabolites—compounds which are found in a smaller range of plants, serving more specific functions. For example, some secondary metabolites are toxins used by plants to deter predation and others are pheromones used to attract insects for pollination. It is these secondary metabolites and pigments that can have therapeutic actions in humans and which can be refined to produce drugs—examples are inulin from the roots of dahlias, quinine from the cinchona, THC and CBD from the flowers of cannabis, morphine and codeine from the poppy, and digoxin from the foxglove.
Plants synthesize a variety of phytochemicals, but most are derivatives:
Alkaloids are a class of chemical compounds containing a nitrogen ring. Alkaloids are produced by a large variety of organisms, including bacteria, fungi, plants, and animals, and are part of the group of natural products (also called secondary metabolites). Many alkaloids can be purified from crude extracts by acid-base extraction. Many alkaloids are toxic to other organisms.
Polyphenols (phenolics) are compounds that contain phenol rings. The anthocyanins that give grapes their purple color, the isoflavones, the phytoestrogens from soy and the tannins that give tea its astringency are phenolics.
Glycosides are molecules in which a sugar is bound to a non-carbohydrate moiety, usually a small organic molecule. Glycosides play numerous important roles in living organisms. Many plants store chemicals in the form of inactive glycosides. These can be activated by enzyme hydrolysis, which causes the sugar part to be broken off, making the chemical available for use.
Terpenes are a large and diverse class of organic compounds, produced by a variety of plants, particularly conifers, which are often strong smelling and thus may have a protective function. They are the major components of resins, and of turpentine produced from resins. When terpenes are modified chemically, such as by oxidation or rearrangement of the carbon skeleton, the resulting compounds are generally referred to as terpenoids. Terpenes and terpenoids are the primary constituents of the essential oils of many types of plants and flowers. Essential oils are used widely as natural flavor additives for food, as fragrances in perfumery, and in traditional and alternative medicines such as aromatherapy. Synthetic variations and derivatives of natural terpenes and terpenoids also greatly expand the variety of aromas used in perfumery and flavors used in food additives. The fragrance of rose and lavender is due to monoterpenes. The carotenoids produce shades of red, yellow and orange in pumpkin, maize, and tomatoes.
Natural products chemistry
A typical protocol to isolate a pure chemical agent of natural origin is bioassay-guided fractionation, meaning step-by-step separation of extracted components based on differences in their physicochemical properties, and assessing the biological activity, followed by the next round of separation and assaying. Typically, such work is initiated after a given crude drug formulation (typically prepared by solvent extraction of the natural material) is deemed "active" in a particular in vitro assay. If the end-goal of the work at hand is to identify which one(s) of the scores or hundreds of compounds are responsible for the observed in vitro activity, the path to that end is fairly straightforward:
fractionate the crude extract, e.g. by solvent partitioning or chromatography.
test the fractions thereby generated with in vitro assays.
repeat steps 1) and 2) until pure, active compounds are obtained.
determine structure(s) of active compound(s), typically by using spectroscopic methods.
In vitro activity does not necessarily translate to biological activity in humans or other living systems.
Herbal
In the past, in some countries in Asia and Africa, up to 80% of the population may rely on traditional medicine (including herbal medicine) for primary health care. Native American cultures have also relied on traditional medicine such as ceremonial smoking of tobacco, potlatch ceremonies, and herbalism, to name a few, prior to European colonization. Knowledge of traditional medicinal practices is disappearing in indigenous communities, particularly in the Amazon.
With worldwide research into pharmacology as well as medicine, traditional medicines or ancient herbal medicines are often translated into modern remedies, such as the antimalarial drug artemisinin, isolated from Artemisia annua, a herb known in Chinese medicine for treating fever. Its extracts were found to have antimalarial activity, leading to the Nobel Prize-winning discovery of artemisinin.
Microscopical evaluation
Microscopic evaluation is essential for the initial identification of herbs, identifying small fragments of crude or powdered herbs, identifying adulterants (such as insects, animal feces, mold, fungi, etc.), and recognizing the plant by its characteristic tissue features. Techniques such as microscopic linear measurements, determination of leaf constants, and quantitative microscopy are also utilized in this evaluation. The determination of leaf constants includes stomatal number, stomatal index, vein islet number, vein termination number, and palisade ratio.
The stomatal index is the percentage formed by the number of stomata divided by the total number of epidermal cells, with each stoma being counted as one cell (a worked example follows the definitions below):
S.I. = (S × 100) / (E + S)
where:
S.I. is the stomatal index
S is the number of stomata per unit area
E is the number of epidermal cells in the same unit area.
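For example, with hypothetical counts of S = 30 stomata and E = 270 ordinary epidermal cells in the same field of view:

\text{S.I.} = \frac{S \times 100}{E + S} = \frac{30 \times 100}{270 + 30} = 10\% .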
See also
Bioprospecting
List of plants used in herbalism
Pharmacognosy Reviews
References
External links
Equifinality
Equifinality is the principle that in open systems a given end state can be reached by many potential means. The term and concept are due to the German Hans Driesch, the developmental biologist, later applied by the Austrian Ludwig von Bertalanffy, the founder of general systems theory, and by William T. Powers, the founder of perceptual control theory. Driesch and von Bertalanffy preferred this term, in contrast to "goal", in describing complex systems' similar or convergent behavior. Powers simply emphasized the flexibility of response, since the term stresses that the same end state may be achieved via many different paths or trajectories.
In closed systems, a direct cause-and-effect relationship exists between the initial condition and the final state of the system: When a computer's 'on' switch is pushed, the system powers up. Open systems (such as biological and social systems), however, operate quite differently. The idea of equifinality suggests that similar results may be achieved with different initial conditions and in many different ways. This phenomenon has also been referred to as isotelesis (from Greek ἴσος isos "equal" and τέλεσις telesis: "the intelligent direction of effort toward the achievement of an end") in games involving superrationality.
Overview
In business, equifinality implies that firms may establish similar competitive advantages based on substantially different competencies.
In psychology, equifinality refers to how different early experiences in life (e.g., parental divorce, physical abuse, parental substance abuse) can lead to similar outcomes (e.g., childhood depression). In other words, there are many different early experiences that can lead to the same psychological disorder.
In archaeology, equifinality refers to how different historical processes may lead to a similar outcome or social formation. For example, the development of agriculture or the bow and arrow occurred independently in many different areas of the world, yet for different reasons and through different historical trajectories. This highlights that generalizations based on cross-cultural comparisons cannot be made uncritically.
In Earth and environmental sciences, two general types of equifinality are distinguished: process equifinality (concerned with real-world open systems) and model equifinality (concerned with conceptual open systems). For example, process equifinality in geomorphology indicates that similar landforms might arise as a result of quite different sets of processes. Model equifinality refers to a condition where distinct configurations of model components (e.g. distinct model parameter values) can lead to similar or equally acceptable simulations (or representations of the real-world process of interest). This similarity or equal acceptability is conditional on the objective functions and criteria of acceptability defined by the modeler. While model equifinality has various facets, parameter and structural equifinality are the facets most commonly recognized and examined in modeling studies. Equifinality (particularly parameter equifinality) and Monte Carlo experiments are the foundation of the GLUE method, which was the first generalised method for uncertainty assessment in hydrological modeling. GLUE is now widely used within and beyond environmental modeling.
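The following minimal Python sketch (entirely synthetic, with an arbitrary toy model and an arbitrary acceptability threshold) illustrates parameter equifinality in the GLUE spirit: many different parameter sets reproduce the "observed" behaviour about equally well and are all retained as behavioural.

import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 10.0, 50)
# "Observations" from a toy exponential-decay model with noise.
obs = 3.0 * np.exp(-0.4 * t) + 0.05 * rng.standard_normal(t.size)

def model(a, k):
    """Toy model: a * exp(-k * t)."""
    return a * np.exp(-k * t)

# Monte Carlo sampling of the parameter space (GLUE-style).
a_samples = rng.uniform(0.5, 6.0, 5000)
k_samples = rng.uniform(0.05, 1.0, 5000)
rmse = np.array([np.sqrt(np.mean((model(a, k) - obs) ** 2))
                 for a, k in zip(a_samples, k_samples)])

# Parameter sets below the acceptability threshold are "behavioural";
# typically many distinct (a, k) pairs qualify, i.e. parameter equifinality.
behavioural = rmse < 0.15
print(f"{behavioural.sum()} behavioural parameter sets out of {rmse.size}")
print("range of behavioural a:", a_samples[behavioural].min(), a_samples[behavioural].max())
print("range of behavioural k:", k_samples[behavioural].min(), k_samples[behavioural].max())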
See also
GLUE – Generalized Likelihood Uncertainty Estimation (when modeling environmental systems there are many different model structures and parameter sets that may be behavioural or acceptable in reproducing the behaviour of that system)
TMTOWTDI – Computer programming maxim: "there is more than one way to do it"
Underdetermination
Consilience
Convergent evolution
Teleonomy
Degeneracy (biology)
Kruskal's principle
Multicollinearity
References
Publications
Bertalanffy, Ludwig von, General Systems Theory, 1968
Beven, K.J. and Binley, A.M., 1992. The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6, pp. 279–298.
Beven, K.J. and Freer, J., 2001a. Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems, Journal of Hydrology, 249, 11–29.
Croft, Gary W., Glossary of Systems Theory and Practice for the Applied Behavioral Sciences, Syntropy Incorporated, Freeland, WA, Prepublication Review Copy, 1996
Durkin, James E. (ed.), Living Groups: Group Psychotherapy and General System Theory, Brunner/Mazel, New York, 1981
Mash, E. J., & Wolfe, D. A. (2005). Abnormal Child Psychology (3rd edition). Wadsworth Canada. pp. 13–14.
Weisbord, Marvin R., Productive Workplaces: Organizing and Managing for Dignity, Meaning, and Community, Jossey-Bass Publishers, San Francisco, 1987
Tang, J.Y. and Zhuang, Q. (2008). Equifinality in parameterization of process-based biogeochemistry models: A significant uncertainty source to the estimation of regional carbon dynamics, J. Geophys. Res., 113, G04010.
Drug design
Drug design, often referred to as rational drug design or simply rational design, is the inventive process of finding new medications based on the knowledge of a biological target. The drug is most commonly an organic small molecule that activates or inhibits the function of a biomolecule such as a protein, which in turn results in a therapeutic benefit to the patient. In the most basic sense, drug design involves the design of molecules that are complementary in shape and charge to the biomolecular target with which they interact and therefore will bind to it. Drug design frequently but not necessarily relies on computer modeling techniques. This type of modeling is sometimes referred to as computer-aided drug design. Finally, drug design that relies on the knowledge of the three-dimensional structure of the biomolecular target is known as structure-based drug design. In addition to small molecules, biopharmaceuticals including peptides and especially therapeutic antibodies are an increasingly important class of drugs and computational methods for improving the affinity, selectivity, and stability of these protein-based therapeutics have also been developed.
Definition
The phrase "drug design" is similar to ligand design (i.e., design of a molecule that will bind tightly to its target). Although design techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, and side effects, that first must be optimized before a ligand can become a safe and effictive drug. These other characteristics are often difficult to predict with rational design techniques.
Due to high attrition rates, especially during clinical phases of drug development, more attention is being focused early in the drug design process on selecting candidate drugs whose physicochemical properties are predicted to result in fewer complications during development and hence more likely to lead to an approved, marketed drug. Furthermore, in vitro experiments complemented with computation methods are increasingly used in early drug discovery to select compounds with more favorable ADME (absorption, distribution, metabolism, and excretion) and toxicological profiles.
Drug targets
A biomolecular target (most commonly a protein or a nucleic acid) is a key molecule involved in a particular metabolic or signaling pathway that is associated with a specific disease condition or pathology or to the infectivity or survival of a microbial pathogen. Potential drug targets are not necessarily disease causing but must by definition be disease modifying. In some cases, small molecules will be designed to enhance or inhibit the target function in the specific disease modifying pathway. Small molecules (for example receptor agonists, antagonists, inverse agonists, or modulators; enzyme activators or inhibitors; or ion channel openers or blockers) will be designed that are complementary to the binding site of target. Small molecules (drugs) can be designed so as not to affect any other important "off-target" molecules (often referred to as antitargets) since drug interactions with off-target molecules may lead to undesirable side effects. Due to similarities in binding sites, closely related targets identified through sequence homology have the highest chance of cross reactivity and hence highest side effect potential.
Most commonly, drugs are organic small molecules produced through chemical synthesis, but biopolymer-based drugs (also known as biopharmaceuticals) produced through biological processes are becoming increasingly more common. In addition, mRNA-based gene silencing technologies may have therapeutic applications. For example, nanomedicines based on mRNA can streamline and expedite the drug development process, enabling transient and localized expression of immunostimulatory molecules. In vitro transcribed (IVT) mRNA allows for delivery to various accessible cell types via the blood or alternative pathways. The use of IVT mRNA serves to convey specific genetic information into a person's cells, with the primary objective of preventing or altering a particular disease.
Drug discovery
Phenotypic drug discovery
Phenotypic drug discovery is a traditional drug discovery method, also known as forward pharmacology or classical pharmacology. It uses the process of phenotypic screening on collections of synthetic small molecules, natural products, or extracts within chemical libraries to pinpoint substances exhibiting beneficial therapeutic effects. This method is to first discover the in vivo or in vitro functional activity of drugs (such as extract drugs or natural products), and then perform target identification. Phenotypic discovery uses a practical and target-independent approach to generate initial leads, aiming to discover pharmacologically active compounds and therapeutics that operate through novel drug mechanisms. This method allows the exploration of disease phenotypes to find potential treatments for conditions with unknown, complex, or multifactorial origins, where the understanding of molecular targets is insufficient for effective intervention.
Rational drug discovery
Rational drug design (also called reverse pharmacology) begins with a hypothesis that modulation of a specific biological target may have therapeutic value. In order for a biomolecule to be selected as a drug target, two essential pieces of information are required. The first is evidence that modulation of the target will be disease modifying. This knowledge may come from, for example, disease linkage studies that show an association between mutations in the biological target and certain disease states. The second is that the target is capable of binding to a small molecule and that its activity can be modulated by the small molecule.
Once a suitable target has been identified, the target is normally cloned and produced and purified. The purified protein is then used to establish a screening assay. In addition, the three-dimensional structure of the target may be determined.
The search for small molecules that bind to the target is begun by screening libraries of potential drug compounds. This may be done by using the screening assay (a "wet screen"). In addition, if the structure of the target is available, a virtual screen may be performed of candidate drugs. Ideally, the candidate drug compounds should be "drug-like", that is, they should possess properties that are predicted to lead to oral bioavailability, adequate chemical and metabolic stability, and minimal toxic effects. Several methods are available to estimate druglikeness, such as Lipinski's Rule of Five and a range of scoring methods such as lipophilic efficiency. Several methods for predicting drug metabolism have also been proposed in the scientific literature.
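As an illustration of such a druglikeness filter, the sketch below counts violations of Lipinski's Rule of Five from precomputed molecular properties. The property values in the example call are hypothetical, and computing them from an actual structure would require a cheminformatics toolkit; this is a minimal sketch, not a complete druglikeness assessment.

def lipinski_violations(mol_weight, logp, h_bond_donors, h_bond_acceptors):
    """Count violations of Lipinski's Rule of Five for a candidate compound.

    The four properties are assumed to have been computed elsewhere;
    this sketch only applies the rules.
    """
    violations = 0
    if mol_weight > 500:        # molecular weight over 500 Da
        violations += 1
    if logp > 5:                # octanol-water partition coefficient over 5
        violations += 1
    if h_bond_donors > 5:       # more than 5 hydrogen-bond donors
        violations += 1
    if h_bond_acceptors > 10:   # more than 10 hydrogen-bond acceptors
        violations += 1
    return violations

# A compound is conventionally considered "drug-like" with at most one violation.
# Hypothetical property values:
print(lipinski_violations(mol_weight=349.4, logp=2.9, h_bond_donors=1, h_bond_acceptors=5))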
Due to the large number of drug properties that must be simultaneously optimized during the design process, multi-objective optimization techniques are sometimes employed. Finally because of the limitations in the current methods for prediction of activity, drug design is still very much reliant on serendipity and bounded rationality.
Computer-aided drug design
The most fundamental goal in drug design is to predict whether a given molecule will bind to a target and if so how strongly. Molecular mechanics or molecular dynamics is most often used to estimate the strength of the intermolecular interaction between the small molecule and its biological target. These methods are also used to predict the conformation of the small molecule and to model conformational changes in the target that may occur when the small molecule binds to it. Semi-empirical, ab initio quantum chemistry methods, or density functional theory are often used to provide optimized parameters for the molecular mechanics calculations and also provide an estimate of the electronic properties (electrostatic potential, polarizability, etc.) of the drug candidate that will influence binding affinity.
Molecular mechanics methods may also be used to provide semi-quantitative predictions of the binding affinity. Knowledge-based scoring functions may also be used to provide binding affinity estimates. These methods use linear regression, machine learning, neural nets, or other statistical techniques to derive predictive binding affinity equations by fitting experimental affinities to computationally derived interaction energies between the small molecule and the target.
Ideally, the computational method will be able to predict affinity before a compound is synthesized and hence in theory only one compound needs to be synthesized, saving enormous time and cost. The reality is that present computational methods are imperfect and provide, at best, only qualitatively accurate estimates of affinity. In practice, it requires several iterations of design, synthesis, and testing before an optimal drug is discovered. Computational methods have accelerated discovery by reducing the number of iterations required and have often provided novel structures.
Computer-aided drug design may be used at any of the following stages of drug discovery:
hit identification using virtual screening (structure- or ligand-based design)
hit-to-lead optimization of affinity and selectivity (structure-based design, QSAR, etc.)
lead optimization of other pharmaceutical properties while maintaining affinity
To compensate for the limited accuracy of the binding affinities calculated by current scoring functions, protein-ligand interaction and compound 3D structure information are used in post-screening analysis. For structure-based drug design, several post-screening analyses focusing on protein-ligand interactions have been developed for improving enrichment and effectively mining potential candidates (a minimal consensus-scoring sketch follows the list below):
Consensus scoring: select candidates by the combined vote of multiple scoring functions; this may lose the relationship between protein-ligand structural information and the scoring criterion.
Cluster analysis: represent and cluster candidates according to protein-ligand 3D information; this requires a meaningful representation of protein-ligand interactions.
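A minimal sketch of rank-by-vote consensus scoring, as referenced above. The compound identifiers and scores are hypothetical, and the assumption that higher scores are better would need adjusting for scoring functions with the opposite sign convention.

from collections import defaultdict

def consensus_rank(score_tables):
    """Rank-by-vote consensus scoring.

    score_tables: list of dicts, one per scoring function, mapping
    compound id -> score (higher assumed better).
    Returns compound ids sorted by their summed ranks (lower is better).
    """
    rank_sum = defaultdict(int)
    for table in score_tables:
        # Sort compounds from best to worst under this scoring function.
        ordered = sorted(table, key=table.get, reverse=True)
        for rank, compound in enumerate(ordered, start=1):
            rank_sum[compound] += rank
    return sorted(rank_sum, key=rank_sum.get)

# Hypothetical scores from three scoring functions for three compounds
tables = [{"cpd1": 8.2, "cpd2": 7.1, "cpd3": 6.5},
          {"cpd1": 55.0, "cpd2": 61.0, "cpd3": 40.0},
          {"cpd1": -9.3, "cpd2": -8.7, "cpd3": -10.1}]
print(consensus_rank(tables))  # ['cpd2', 'cpd1', 'cpd3']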
Types
There are two major types of drug design. The first is referred to as ligand-based drug design and the second, structure-based drug design.
Ligand-based
Ligand-based drug design (or indirect drug design) relies on knowledge of other molecules that bind to the biological target of interest. These other molecules may be used to derive a pharmacophore model that defines the minimum necessary structural characteristics a molecule must possess in order to bind to the target. A model of the biological target may be built based on the knowledge of what binds to it, and this model in turn may be used to design new molecular entities that interact with the target. Alternatively, a quantitative structure-activity relationship (QSAR), in which a correlation between calculated properties of molecules and their experimentally determined biological activity is established, may be derived. These QSAR relationships in turn may be used to predict the activity of new analogs.
Structure-based
Structure-based drug design (or direct drug design) relies on knowledge of the three dimensional structure of the biological target obtained through methods such as x-ray crystallography or NMR spectroscopy. If an experimental structure of a target is not available, it may be possible to create a homology model of the target based on the experimental structure of a related protein. Using the structure of the biological target, candidate drugs that are predicted to bind with high affinity and selectivity to the target may be designed using interactive graphics and the intuition of a medicinal chemist. Alternatively, various automated computational procedures may be used to suggest new drug candidates.
Current methods for structure-based drug design can be divided roughly into three main categories. The first method is identification of new ligands for a given receptor by searching large databases of 3D structures of small molecules to find those fitting the binding pocket of the receptor using fast approximate docking programs. This method is known as virtual screening.
A second category is de novo design of new ligands. In this method, ligand molecules are built up within the constraints of the binding pocket by assembling small pieces in a stepwise manner. These pieces can be either individual atoms or molecular fragments. The key advantage of such a method is that novel structures, not contained in any database, can be suggested. A third method is the optimization of known ligands by evaluating proposed analogs within the binding cavity.
Binding site identification
Binding site identification is the first step in structure based design. If the structure of the target or a sufficiently similar homolog is determined in the presence of a bound ligand, then the ligand should be observable in the structure in which case location of the binding site is trivial. However, there may be unoccupied allosteric binding sites that may be of interest. Furthermore, it may be that only apoprotein (protein without ligand) structures are available and the reliable identification of unoccupied sites that have the potential to bind ligands with high affinity is non-trivial. In brief, binding site identification usually relies on identification of concave surfaces on the protein that can accommodate drug sized molecules that also possess appropriate "hot spots" (hydrophobic surfaces, hydrogen bonding sites, etc.) that drive ligand binding.
Scoring functions
Structure-based drug design attempts to use the structure of proteins as a basis for designing new ligands by applying the principles of molecular recognition. Selective high affinity binding to the target is generally desirable since it leads to more efficacious drugs with fewer side effects. Thus, one of the most important principles for designing or obtaining potential new ligands is to predict the binding affinity of a certain ligand to its target (and known antitargets) and use the predicted affinity as a criterion for selection.
One early general-purpose empirical scoring function to describe the binding energy of ligands to receptors was developed by Böhm. This empirical scoring function took the form:

ΔGbind = ΔG0 + ΔGhb·Σh-bonds f(ΔR, Δα) + ΔGionic·Σionic f(ΔR, Δα) + ΔGlip·|Alipo| + ΔGrot·NROT

Here f(ΔR, Δα) is a penalty function accounting for deviations of each hydrogen bond or ionic interaction from ideal geometry, and NROT is the number of rotatable bonds in the ligand frozen upon binding.
where:
ΔG0 – empirically derived offset that in part corresponds to the overall loss of translational and rotational entropy of the ligand upon binding.
ΔGhb – contribution from hydrogen bonding
ΔGionic – contribution from ionic interactions
ΔGlip – contribution from lipophilic interactions where |Alipo| is surface area of lipophilic contact between the ligand and receptor
ΔGrot – entropy penalty due to freezing a rotatable bond in the ligand upon binding
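A sketch evaluating a Böhm-style empirical score from pre-counted interaction terms. The geometry penalty f(ΔR, Δα) is treated as ideal (equal to 1) for simplicity, and the default coefficients are illustrative placeholders rather than the published fit; the interaction counts would normally come from analysis of a docked protein-ligand complex.

def boehm_like_score(n_hbonds, n_ionic, lipo_area, n_rot_bonds,
                     dG0=5.4, dG_hb=-4.7, dG_ionic=-8.3, dG_lip=-0.17, dG_rot=1.4):
    """Evaluate a Böhm-style empirical binding score.

    Counts and lipophilic contact area (in square angstroms) are assumed to be
    precomputed; the default coefficients are illustrative placeholders.
    Geometry-dependent penalty factors f(ΔR, Δα) are omitted (treated as 1).
    """
    return (dG0
            + dG_hb * n_hbonds        # hydrogen-bond contribution
            + dG_ionic * n_ionic      # ionic-interaction contribution
            + dG_lip * lipo_area      # lipophilic contact contribution
            + dG_rot * n_rot_bonds)   # rotatable-bond entropy penalty

# Hypothetical complex: 3 hydrogen bonds, 1 salt bridge, 120 Å² lipophilic contact, 4 frozen rotors
print(boehm_like_score(n_hbonds=3, n_ionic=1, lipo_area=120.0, n_rot_bonds=4))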
A more general thermodynamic "master" equation is as follows:

ΔGbind = ΔGdesolvation + ΔGmotion + ΔGconfiguration + ΔGinteraction
where:
desolvation – enthalpic penalty for removing the ligand from solvent
motion – entropic penalty for reducing the degrees of freedom when a ligand binds to its receptor
configuration – conformational strain energy required to put the ligand in its "active" conformation
interaction – enthalpic gain for "resolvating" the ligand with its receptor
The basic idea is that the overall binding free energy can be decomposed into independent components that are known to be important for the binding process. Each component reflects a certain kind of free energy alteration during the binding process between a ligand and its target receptor. The master equation is the linear combination of these components. The Gibbs free energy equation then relates the binding free energy to the dissociation equilibrium constant, Kd (ΔGbind = RT ln Kd), linking the measured affinity to the components of the master equation.
Various computational methods are used to estimate each of the components of the master equation. For example, the change in polar surface area upon ligand binding can be used to estimate the desolvation energy. The number of rotatable bonds frozen upon ligand binding is proportional to the motion term. The configurational or strain energy can be estimated using molecular mechanics calculations. Finally the interaction energy can be estimated using methods such as the change in non polar surface, statistically derived potentials of mean force, the number of hydrogen bonds formed, etc. In practice, the components of the master equation are fit to experimental data using multiple linear regression. This can be done with a diverse training set including many types of ligands and receptors to produce a less accurate but more general "global" model or a more restricted set of ligands and receptors to produce a more accurate but less general "local" model.
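A sketch of fitting the master-equation components to experimental affinities by multiple linear regression, as described above. The component values and measured free energies below are synthetic numbers used purely for illustration.

import numpy as np

# Rows: one ligand-receptor complex each. Columns: computed estimates of the
# desolvation, motion, configuration and interaction components (synthetic).
X = np.array([[3.1, 2.0, 1.2, -12.4],
              [4.0, 3.5, 0.8, -15.1],
              [2.2, 1.1, 1.9,  -9.8],
              [5.3, 4.2, 2.5, -18.9],
              [1.7, 0.9, 0.6,  -7.5],
              [3.6, 2.8, 1.4, -13.7]])
# Experimentally measured binding free energies for the same complexes (synthetic).
dG_exp = np.array([-6.8, -8.0, -5.1, -9.4, -4.2, -7.3])

# Add a constant column so the fit includes an offset term.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, dG_exp, rcond=None)
print("fitted weights (desolvation, motion, configuration, interaction, offset):", coeffs)

# The fitted weights can then predict the binding free energy of a new complex
# from its computed components (last entry is the constant term).
new_complex = np.array([2.8, 1.5, 1.0, -11.0, 1.0])
print("predicted ΔG:", new_complex @ coeffs)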
Examples
A particular example of rational drug design involves the use of three-dimensional information about biomolecules obtained from such techniques as X-ray crystallography and NMR spectroscopy. Computer-aided drug design in particular becomes much more tractable when there is a high-resolution structure of a target protein bound to a potent ligand. This approach to drug discovery is sometimes referred to as structure-based drug design. The first unequivocal example of the application of structure-based drug design leading to an approved drug is the carbonic anhydrase inhibitor dorzolamide, which was approved in 1995.
Another case study in rational drug design is imatinib, a tyrosine kinase inhibitor designed specifically for the bcr-abl fusion protein that is characteristic for Philadelphia chromosome-positive leukemias (chronic myelogenous leukemia and occasionally acute lymphocytic leukemia). Imatinib is substantially different from previous drugs for cancer, as most agents of chemotherapy simply target rapidly dividing cells, not differentiating between cancer cells and other tissues.
Additional examples include:
Many of the atypical antipsychotics
Cimetidine, the prototypical H2-receptor antagonist from which the later members of the class were developed
Selective COX-2 inhibitor NSAIDs
Enfuvirtide, a peptide HIV entry inhibitor
Nonbenzodiazepines like zolpidem and zopiclone
Raltegravir, an HIV integrase inhibitor
SSRIs (selective serotonin reuptake inhibitors), a class of antidepressants
Zanamivir, an antiviral drug
Drug screening
Types of drug screening include phenotypic screening, high-throughput screening, and virtual screening. Phenotypic screening is characterized by the process of screening drugs using cellular or animal disease models to identify compounds that alter the phenotype and produce beneficial disease-related effects. Emerging technologies in high-throughput screening substantially enhance processing speed and decrease the required detection volume. Virtual screening is performed by computer, enabling a large number of molecules to be screened in a short cycle and at low cost. Virtual screening uses a range of computational methods that empower chemists to reduce extensive virtual libraries into more manageable sizes.
Case studies
5-HT3 antagonists
Acetylcholine receptor agonists
Angiotensin receptor antagonists
Bcr-Abl tyrosine-kinase inhibitors
Cannabinoid receptor antagonists
CCR5 receptor antagonists
Cyclooxygenase 2 inhibitors
Dipeptidyl peptidase-4 inhibitors
HIV protease inhibitors
NK1 receptor antagonists
Non-nucleoside reverse transcriptase inhibitors
Nucleoside and nucleotide reverse transcriptase inhibitors
PDE5 inhibitors
Proton pump inhibitors
Renin inhibitors
Triptans
TRPV1 antagonists
c-Met inhibitors
Criticism
It has been argued that the highly rigid and focused nature of rational drug design suppresses serendipity in drug discovery.
See also
Bioisostere
Bioinformatics
Cheminformatics
Drug development
Drug discovery
List of pharmaceutical companies
Medicinal chemistry
Molecular design software
Molecular modification
Retrometabolic drug design
References
External links
Drug Design Org: https://www.drugdesign.org/chapters/drug-design/
Homology modeling | Homology modeling, also known as comparative modeling of protein, refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein (the "template"). Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of a sequence alignment that maps residues in the query sequence to residues in the template sequence. It has been seen that protein structures are more conserved than protein sequences amongst homologues, but sequences falling below a 20% sequence identity can have very different structure.
Evolutionarily related proteins have similar sequences and naturally occurring homologous proteins have similar protein structure.
It has been shown that three-dimensional protein structure is evolutionarily more conserved than would be expected on the basis of sequence conservation alone.
The sequence alignment and template structure are then used to produce a structural model of the target. Because protein structures are more conserved than DNA sequences, detectable levels of sequence similarity usually imply significant structural similarity.
The quality of the homology model is dependent on the quality of the sequence alignment and template structure. The approach can be complicated by the presence of alignment gaps (commonly called indels) that indicate a structural region present in the target but not in the template, and by structure gaps in the template that arise from poor resolution in the experimental procedure (usually X-ray crystallography) used to solve the structure. Model quality declines with decreasing sequence identity; a typical model has ~1–2 Å root mean square deviation between the matched Cα atoms at 70% sequence identity but only 2–4 Å agreement at 25% sequence identity. However, the errors are significantly higher in the loop regions, where the amino acid sequences of the target and template proteins may be completely different.
Regions of the model that were constructed without a template, usually by loop modeling, are generally much less accurate than the rest of the model. Errors in side chain packing and position also increase with decreasing identity, and variations in these packing configurations have been suggested as a major reason for poor model quality at low identity. Taken together, these various atomic-position errors are significant and impede the use of homology models for purposes that require atomic-resolution data, such as drug design and protein–protein interaction predictions; even the quaternary structure of a protein may be difficult to predict from homology models of its subunit(s). Nevertheless, homology models can be useful in reaching qualitative conclusions about the biochemistry of the query sequence, especially in formulating hypotheses about why certain residues are conserved, which may in turn lead to experiments to test those hypotheses. For example, the spatial arrangement of conserved residues may suggest whether a particular residue is conserved to stabilize the folding, to participate in binding some small molecule, or to foster association with another protein or nucleic acid.
Homology modeling can produce high-quality structural models when the target and template are closely related, which has inspired the formation of a structural genomics consortium dedicated to the production of representative experimental structures for all classes of protein folds. The chief inaccuracies in homology modeling, which worsen with lower sequence identity, derive from errors in the initial sequence alignment and from improper template selection. Like other methods of structure prediction, current practice in homology modeling is assessed in a biennial large-scale experiment known as the Critical Assessment of Techniques for Protein Structure Prediction, or Critical Assessment of Structure Prediction (CASP).
Motive
The method of homology modeling is based on the observation that protein tertiary structure is better conserved than amino acid sequence. Thus, even proteins that have diverged appreciably in sequence but still share detectable similarity will also share common structural properties, particularly the overall fold. Because it is difficult and time-consuming to obtain experimental structures from methods such as X-ray crystallography and protein NMR for every protein of interest, homology modeling can provide useful structural models for generating hypotheses about a protein's function and directing further experimental work.
There are exceptions to the general rule that proteins sharing significant sequence identity will share a fold. For example, a judiciously chosen set of mutations of less than 50% of a protein can cause the protein to adopt a completely different fold. However, such a massive structural rearrangement is unlikely to occur in evolution, especially since the protein is usually under the constraint that it must fold properly and carry out its function in the cell. Consequently, the roughly folded structure of a protein (its "topology") is conserved longer than its amino-acid sequence and much longer than the corresponding DNA sequence; in other words, two proteins may share a similar fold even if their evolutionary relationship is so distant that it cannot be discerned reliably. For comparison, the function of a protein is conserved much less than the protein sequence, since relatively few changes in amino-acid sequence are required to take on a related function.
Steps in model production
The homology modeling procedure can be broken down into four sequential steps: template selection, target-template alignment, model construction, and model assessment. The first two steps are often essentially performed together, as the most common methods of identifying templates rely on the production of sequence alignments; however, these alignments may not be of sufficient quality because database search techniques prioritize speed over alignment quality. These processes can be performed iteratively to improve the quality of the final model, although quality assessments that are not dependent on the true target structure are still under development.
Optimizing the speed and accuracy of these steps for use in large-scale automated structure prediction is a key component of structural genomics initiatives, partly because the resulting volume of data will be too large to process manually and partly because the goal of structural genomics requires providing models of reasonable quality to researchers who are not themselves structure prediction experts.
Template selection and sequence alignment
The critical first step in homology modeling is the identification of the best template structure, if indeed any are available. The simplest method of template identification relies on serial pairwise sequence alignments aided by database search techniques such as FASTA and BLAST. More sensitive methods based on multiple sequence alignment – of which PSI-BLAST is the most common example – iteratively update their position-specific scoring matrix to successively identify more distantly related homologs. This family of methods has been shown to produce a larger number of potential templates and to identify better templates for sequences that have only distant relationships to any solved structure. Protein threading, also known as fold recognition or 3D-1D alignment, can also be used as a search technique for identifying templates to be used in traditional homology modeling methods. Recent CASP experiments indicate that some protein threading methods such as RaptorX are more sensitive than purely sequence(profile)-based methods when only distantly-related templates are available for the proteins under prediction. When performing a BLAST search, a reliable first approach is to identify hits with a sufficiently low E-value, which are considered sufficiently close in evolution to make a reliable homology model. Other factors may tip the balance in marginal cases; for example, the template may have a function similar to that of the query sequence, or it may belong to a homologous operon. However, a template with a poor E-value should generally not be chosen, even if it is the only one available, since it may well have a wrong structure, leading to the production of a misguided model. A better approach is to submit the primary sequence to fold-recognition servers or, better still, consensus meta-servers which improve upon individual fold-recognition servers by identifying similarities (consensus) among independent predictions.
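As an illustration of E-value-based template selection, the following sketch filters hits from a BLAST run saved in tabular format (-outfmt 6 is assumed). The file name and the 1e-5 cutoff are hypothetical choices for the example, not fixed conventions.

def select_template_candidates(blast_tabular_path, evalue_cutoff=1e-5):
    """Parse BLAST tabular output (-outfmt 6 assumed) and keep candidate templates.

    Columns in outfmt 6: query id, subject id, % identity, alignment length,
    mismatches, gap opens, query start/end, subject start/end, E-value, bit score.
    The E-value cutoff of 1e-5 is an illustrative choice.
    """
    candidates = []
    with open(blast_tabular_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            subject_id = fields[1]
            pct_identity = float(fields[2])
            evalue = float(fields[10])
            if evalue <= evalue_cutoff:
                candidates.append((subject_id, pct_identity, evalue))
    # Sort by E-value so the statistically strongest hits come first.
    return sorted(candidates, key=lambda hit: hit[2])

# Example (hypothetical file name):
# hits = select_template_candidates("query_vs_pdb.blast.tsv")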
Often several candidate template structures are identified by these approaches. Although some methods can generate hybrid models with better accuracy from multiple templates, most methods rely on a single template. Therefore, choosing the best template from among the candidates is a key step, and can affect the final accuracy of the structure significantly. This choice is guided by several factors, such as the similarity of the query and template sequences, of their functions, and of the predicted query and observed template secondary structures. Perhaps most importantly, the coverage of the aligned regions: the fraction of the query sequence structure that can be predicted from the template, and the plausibility of the resulting model. Thus, sometimes several homology models are produced for a single query sequence, with the most likely candidate chosen only in the final step.
It is possible to use the sequence alignment generated by the database search technique as the basis for the subsequent model production; however, more sophisticated approaches have also been explored. One proposal generates an ensemble of stochastically defined pairwise alignments between the target sequence and a single identified template as a means of exploring "alignment space" in regions of sequence with low local similarity. Another approach uses "profile-profile" alignments, which first generate a sequence profile of the target and systematically compare it to the sequence profiles of solved structures; the coarse-graining inherent in the profile construction is thought to reduce noise introduced by sequence drift in nonessential regions of the sequence.
Model generation
Given a template and an alignment, the information contained therein must be used to generate a three-dimensional structural model of the target, represented as a set of Cartesian coordinates for each atom in the protein. Three major classes of model generation methods have been proposed.
Fragment assembly
The original method of homology modeling relied on the assembly of a complete model from conserved structural fragments identified in closely related solved structures. For example, a modeling study of serine proteases in mammals identified a sharp distinction between "core" structural regions conserved in all experimental structures in the class, and variable regions typically located in the loops where the majority of the sequence differences were localized. Thus unsolved proteins could be modeled by first constructing the conserved core and then substituting variable regions from other proteins in the set of solved structures. Current implementations of this method differ mainly in the way they deal with regions that are not conserved or that lack a template. The variable regions are often constructed with the help of a protein fragment library.
Segment matching
The segment-matching method divides the target into a series of short segments, each of which is matched to its own template fitted from the Protein Data Bank. Thus, sequence alignment is done over segments rather than over the entire protein. Selection of the template for each segment is based on sequence similarity, comparisons of alpha carbon coordinates, and predicted steric conflicts arising from the van der Waals radii of the divergent atoms between target and template.
Satisfaction of spatial restraints
The most common current homology modeling method takes its inspiration from calculations required to construct a three-dimensional structure from data generated by NMR spectroscopy. One or more target-template alignments are used to construct a set of geometrical criteria that are then converted to probability density functions for each restraint. Restraints applied to the main protein internal coordinates – protein backbone distances and dihedral angles – serve as the basis for a global optimization procedure that originally used conjugate gradient energy minimization to iteratively refine the positions of all heavy atoms in the protein.
This method had been dramatically expanded to apply specifically to loop modeling, which can be extremely difficult due to the high flexibility of loops in proteins in aqueous solution. A more recent expansion applies the spatial-restraint model to electron density maps derived from cryoelectron microscopy studies, which provide low-resolution information that is not usually itself sufficient to generate atomic-resolution structural models. To address the problem of inaccuracies in initial target-template sequence alignment, an iterative procedure has also been introduced to refine the alignment on the basis of the initial structural fit. The most commonly used software in spatial restraint-based modeling is MODELLER and a database called ModBase has been established for reliable models generated with it.
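The following sketch illustrates the general idea of satisfying spatial restraints by minimizing a sum of harmonic distance penalties. It is a toy illustration, not MODELLER's actual restraint machinery; the restraint list, atom count, and optimizer choice are invented for the example.

import numpy as np
from scipy.optimize import minimize

# Hypothetical restraints derived from a target-template alignment:
# (atom index i, atom index j, target distance in angstroms)
restraints = [(0, 1, 3.8), (1, 2, 3.8), (2, 3, 3.8), (0, 3, 9.0)]
n_atoms = 4

def restraint_energy(flat_coords):
    """Sum of harmonic penalties for violating the distance restraints."""
    coords = flat_coords.reshape(n_atoms, 3)
    energy = 0.0
    for i, j, d0 in restraints:
        d = np.linalg.norm(coords[i] - coords[j])
        energy += (d - d0) ** 2
    return energy

# Start from arbitrary coordinates and minimize the restraint violation.
x0 = np.random.default_rng(0).normal(scale=5.0, size=n_atoms * 3)
result = minimize(restraint_energy, x0, method="L-BFGS-B")
print("final restraint energy:", result.fun)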
Loop modeling
Regions of the target sequence that are not aligned to a template are modeled by loop modeling; they are the most susceptible to major modeling errors and occur with higher frequency when the target and template have low sequence identity. The coordinates of unmatched sections determined by loop modeling programs are generally much less accurate than those obtained from simply copying the coordinates of a known structure, particularly if the loop is longer than 10 residues. The first two sidechain dihedral angles (χ1 and χ2) can usually be estimated within 30° for an accurate backbone structure; however, the later dihedral angles found in longer side chains such as lysine and arginine are notoriously difficult to predict. Moreover, small errors in χ1 (and, to a lesser extent, in χ2) can cause relatively large errors in the positions of the atoms at the terminus of side chain; such atoms often have a functional importance, particularly when located near the active site.
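Side-chain dihedral angles such as χ1 are computed from four consecutive atomic positions (for example N, CA, CB, CG). A minimal sketch of that calculation, with hypothetical coordinates:

import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (degrees) defined by four points, e.g. N-CA-CB-CG for χ1."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Hypothetical coordinates (angstroms) for N, CA, CB, CG of one residue
n  = np.array([0.0, 0.0, 0.0])
ca = np.array([1.46, 0.0, 0.0])
cb = np.array([2.0, 1.42, 0.0])
cg = np.array([3.5, 1.5, 0.5])
print("chi1 =", dihedral(n, ca, cb, cg), "degrees")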
Model assessment
A large number of methods have been developed for selecting a native-like structure from a set of models. Scoring functions have been based on both molecular mechanics energy functions (Lazaridis and Karplus 1999; Petrey and Honig 2000; Feig and Brooks 2002; Felts et al. 2002; Lee and Duan 2004), statistical potentials (Sippl 1995; Melo and Feytmans 1998; Samudrala and Moult 1998; Rojnuckarin and Subramaniam 1999; Lu and Skolnick 2001; Wallqvist et al. 2002; Zhou and Zhou 2002), residue environments (Luthy et al. 1992; Eisenberg et al. 1997; Park et al. 1997; Summa et al. 2005), local side-chain and backbone interactions (Fang and Shortle 2005), orientation-dependent properties (Buchete et al. 2004a,b; Hamelryck 2005), packing estimates (Berglund et al. 2004), solvation energy (Petrey and Honig 2000; McConkey et al. 2003; Wallner and Elofsson 2003; Berglund et al. 2004), hydrogen bonding (Kortemme et al. 2003), and geometric properties (Colovos and Yeates 1993; Kleywegt 2000; Lovell et al. 2003; Mihalek et al. 2003). A number of methods combine different potentials into a global score, usually using a linear combination of terms (Kortemme et al. 2003; Tosatto 2005), or with the help of machine learning techniques, such as neural networks (Wallner and Elofsson 2003) and support vector machines (SVM) (Eramian et al. 2006). Comparisons of different global model quality assessment programs can be found in recent papers by Pettitt et al. (2005), Tosatto (2005), and Eramian et al. (2006).
Less work has been reported on the local quality assessment of models. Local scores are important in the context of modeling because they can give an estimate of the reliability of different regions of a predicted structure. This information can be used in turn to determine which regions should be refined, which should be considered for modeling by multiple templates, and which should be predicted ab initio. Information on local model quality could also be used to reduce the combinatorial problem when considering alternative alignments; for example, by scoring different local models separately, fewer models would have to be built (assuming that the interactions between the separate regions are negligible or can be estimated separately).
One of the most widely used local scoring methods is Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), which combines secondary structure, solvent accessibility, and polarity of residue environments. ProsaII (Sippl 1993), which is based on a combination of a pairwise statistical potential and a solvation term, is also applied extensively in model evaluation. Other methods include the Errat program (Colovos and Yeates 1993), which considers distributions of nonbonded atoms according to atom type and distance, and the energy strain method (Maiorov and Abagyan 1998), which uses differences from average residue energies in different environments to indicate which parts of a protein structure might be problematic. Melo and Feytmans (1998) use an atomic pairwise potential and a surface-based solvation potential (both knowledge-based) to evaluate protein structures. Apart from the energy strain method, which is a semiempirical approach based on the ECEPP3 force field (Nemethy et al. 1992), all of the local methods listed above are based on statistical potentials. A conceptually distinct approach is the ProQres method, which was very recently introduced by Wallner and Elofsson (2006). ProQres is based on a neural network that combines structural features to distinguish correct from incorrect regions. ProQres was shown to outperform earlier methodologies based on statistical approaches (Verify3D, ProsaII, and Errat). The data presented in Wallner and Elofsson's study suggests that their machine-learning approach based on structural features is indeed superior to statistics-based methods. However, the knowledge-based methods examined in their work, Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), Prosa (Sippl 1993), and Errat (Colovos and Yeates 1993), are not based on newer statistical potentials.
Benchmarking
Several large-scale benchmarking efforts have been made to assess the relative quality of various current homology modeling methods. Critical Assessment of Structure Prediction (CASP) is a community-wide prediction experiment that runs every two years during the summer months and challenges prediction teams to submit structural models for a number of sequences whose structures have recently been solved experimentally but have not yet been published. Its partner Critical Assessment of Fully Automated Structure Prediction (CAFASP) has run in parallel with CASP but evaluates only models produced via fully automated servers. Continuously running experiments that do not have prediction 'seasons' focus mainly on benchmarking publicly available webservers. LiveBench and EVA run continuously to assess participating servers' performance in prediction of imminently released structures from the PDB. CASP and CAFASP serve mainly as evaluations of the state of the art in modeling, while the continuous assessments seek to evaluate the model quality that would be obtained by a non-expert user employing publicly available tools.
Accuracy
The accuracy of the structures generated by homology modeling is highly dependent on the sequence identity between target and template. Above 50% sequence identity, models tend to be reliable, with only minor errors in side chain packing and rotameric state, and an overall RMSD between the modeled and the experimental structure falling around 1 Å. This error is comparable to the typical resolution of a structure solved by NMR. In the 30–50% identity range, errors can be more severe and are often located in loops. Below 30% identity, serious errors occur, sometimes resulting in the basic fold being mis-predicted. This low-identity region is often referred to as the "twilight zone" within which homology modeling is extremely difficult, and to which it is possibly less suited than fold recognition methods.
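The RMSD values quoted above measure the deviation between matched Cα atoms after the structures have been superposed. A minimal sketch of that calculation, assuming the superposition step (for example with the Kabsch algorithm) has already been done and the coordinates below are synthetic:

import numpy as np

def ca_rmsd(model_ca, experimental_ca):
    """Root-mean-square deviation between matched C-alpha coordinates.

    Both inputs are (N, 3) arrays of already-superposed coordinates in angstroms.
    """
    model_ca = np.asarray(model_ca, dtype=float)
    experimental_ca = np.asarray(experimental_ca, dtype=float)
    diff = model_ca - experimental_ca
    return np.sqrt((diff ** 2).sum() / len(model_ca))

# Tiny synthetic example: a three-residue fragment displaced by 1 angstrom in x
model = [[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0]]
experiment = [[1.0, 0.0, 0.0], [4.8, 0.0, 0.0], [8.6, 0.0, 0.0]]
print(ca_rmsd(model, experiment))  # 1.0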
At high sequence identities, the primary source of error in homology modeling derives from the choice of the template or templates on which the model is based, while lower identities exhibit serious errors in sequence alignment that inhibit the production of high-quality models. It has been suggested that the major impediment to quality model production is inadequacies in sequence alignment, since "optimal" structural alignments between two proteins of known structure can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure.
Attempts have been made to improve the accuracy of homology models built with existing methods by subjecting them to molecular dynamics simulation in an effort to improve their RMSD to the experimental structure. However, current force field parameterizations may not be sufficiently accurate for this task, since homology models used as starting structures for molecular dynamics tend to produce slightly worse structures. Slight improvements have been observed in cases where significant restraints were used during the simulation.
Sources of error
The two most common and large-scale sources of error in homology modeling are poor template selection and inaccuracies in target-template sequence alignment. Controlling for these two factors by using a structural alignment, or a sequence alignment produced on the basis of comparing two solved structures, dramatically reduces the errors in final models; these "gold standard" alignments can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. Results from the most recent CASP experiment suggest that "consensus" methods collecting the results of multiple fold recognition and multiple alignment searches increase the likelihood of identifying the correct template; similarly, the use of multiple templates in the model-building step may be worse than the use of the single correct template but better than the use of a single suboptimal one. Alignment errors may be minimized by the use of a multiple alignment even if only one template is used, and by the iterative refinement of local regions of low similarity.
A lesser source of model errors are errors in the template structure. The PDBREPORT database lists several million, mostly very small but occasionally dramatic, errors in experimental (template) structures that have been deposited in the PDB.
Serious local errors can arise in homology models where an insertion or deletion mutation or a gap in a solved structure result in a region of target sequence for which there is no corresponding template. This problem can be minimized by the use of multiple templates, but the method is complicated by the templates' differing local structures around the gap and by the likelihood that a missing region in one experimental structure is also missing in other structures of the same protein family. Missing regions are most common in loops where high local flexibility increases the difficulty of resolving the region by structure-determination methods. Although some guidance is provided even with a single template by the positioning of the ends of the missing region, the longer the gap, the more difficult it is to model. Loops of up to about 9 residues can be modeled with moderate accuracy in some cases if the local alignment is correct. Larger regions are often modeled individually using ab initio structure prediction techniques, although this approach has met with only isolated success.
The rotameric states of side chains and their internal packing arrangement also present difficulties in homology modeling, even in targets for which the backbone structure is relatively easy to predict. This is partly due to the fact that many side chains in crystal structures are not in their "optimal" rotameric state as a result of energetic factors in the hydrophobic core and in the packing of the individual molecules in a protein crystal. One method of addressing this problem requires searching a rotameric library to identify locally low-energy combinations of packing states. It has been suggested that a major reason that homology modeling is so difficult when target-template sequence identity lies below 30% is that such proteins have broadly similar folds but widely divergent side chain packing arrangements.
Utility
Uses of the structural models include protein–protein interaction prediction, protein–protein docking, molecular docking, and functional annotation of genes identified in an organism's genome. Even low-accuracy homology models can be useful for these purposes, because their inaccuracies tend to be located in the loops on the protein surface, which are normally more variable even between closely related proteins. The functional regions of the protein, especially its active site, tend to be more highly conserved and thus more accurately modeled.
Homology models can also be used to identify subtle differences between related proteins that have not all been solved structurally. For example, the method was used to identify cation binding sites on the Na+/K+ ATPase and to propose hypotheses about different ATPases' binding affinity. Used in conjunction with molecular dynamics simulations, homology models can also generate hypotheses about the kinetics and dynamics of a protein, as in studies of the ion selectivity of a potassium channel. Large-scale automated modeling of all identified protein-coding regions in a genome has been attempted for the yeast Saccharomyces cerevisiae, resulting in nearly 1000 quality models for proteins whose structures had not yet been determined at the time of the study, and identifying novel relationships between 236 yeast proteins and other previously solved structures.
See also
Protein structure prediction
Protein structure prediction software
Protein threading
Molecular replacement
References
Monte Carlo method | Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanislaw Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
Overview
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs
Generate inputs randomly from a probability distribution over the domain
Perform a deterministic computation of the outputs
Aggregate the results
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using a Monte Carlo method:
Draw a square, then inscribe a quadrant within it
Uniformly scatter a given number of points over the square
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square, then perform a computation on each input (test whether it falls within the quadrant). Aggregating the results yields our final result, the approximation of π.
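A minimal Python sketch of this quadrant procedure; the sample count and seed are arbitrary choices.

import random

def estimate_pi(n_points, seed=0):
    """Estimate pi by uniformly scattering points over the unit square and
    counting how many fall inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point lies within the quadrant
            inside += 1
    return 4.0 * inside / n_points

print(estimate_pi(1_000_000))  # typically within about 0.005 of pi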
There are two important considerations:
If the points are not uniformly distributed, then the approximation will be poor.
The approximation is generally poor if only a few points are randomly placed in the whole square. On average, the approximation improves as more points are placed.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously used for statistical sampling.
Application
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean ( the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
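The MCMC idea just described can be illustrated with a minimal random-walk Metropolis sampler. The sketch below targets a standard normal distribution through its unnormalized log-density; the step size, sample count, and seed are arbitrary illustrative choices, and this is not a production-quality sampler.

import math
import random

def metropolis_sample(log_target, n_samples, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target density.

    log_target: function returning the log of the (unnormalized) target density.
    The proposal is a symmetric Gaussian step, so the acceptance probability
    reduces to min(1, target(proposal) / target(current)).
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = proposal            # accept the proposed move
        samples.append(x)           # otherwise keep the current state
    return samples

# Target: standard normal distribution (up to a normalizing constant)
samples = metropolis_sample(lambda x: -0.5 * x * x, n_samples=50_000)
print("empirical mean ~", sum(samples) / len(samples))  # close to 0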
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arise (path spaces models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples ( particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
Simple Monte Carlo
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate m for μ by running n simulations and averaging the simulations' results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ; more formally, for any ε > 0, the probability that |μ – m| ≤ ε can be made as close to 1 as desired by taking n large enough.
Typically, the algorithm to obtain m is
s = 0;
for i = 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
repeat
m = s / n;
An example
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
s = 0;
for i = 1 to n do
throw the three dice until T is met or first exceeded; ri = the number of throws;
s = s + ri;
repeat
m = s / n;
If n is large enough, m will be within ε of μ for any ε > 0.
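A direct Python sketch of this dice example; the target total and the number of simulations are arbitrary illustrative choices.

import random

def throws_to_reach(total_target, rng):
    """One simulation: throw three eight-sided dice repeatedly until the running
    total of all dice thrown is at least total_target; return the number of
    throws (of three dice each) that were needed."""
    total = 0
    throws = 0
    while total < total_target:
        total += sum(rng.randint(1, 8) for _ in range(3))
        throws += 1
    return throws

def simple_monte_carlo(total_target, n, seed=0):
    """Average the simulation results over n runs."""
    rng = random.Random(seed)
    s = 0
    for _ in range(n):
        s += throws_to_reach(total_target, rng)
    return s / n

print(simple_monte_carlo(total_target=100, n=100_000))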
Determining a sufficiently large n
General formula
Let ε > 0 be the largest acceptable error, i.e. the goal is that |μ – m| ≤ ε. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level.
Let s² be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k of “sample” simulations. Choose a k; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable."
The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
s1 = 0;
run the simulation for the first time, producing result r1;
m1 = r1; //mi is the mean of the first i simulations
for i = 2 to k do
run the simulation for the ith time, producing result ri;
δi = ri - mi-1;
mi = mi-1 + (1/i)δi;
si = si-1 + ((i - 1)/i)(δi)²;
repeat
s² = sk/(k - 1);
Note that, when the algorithm completes, mk is the mean of the k results.
n is sufficiently large when n ≥ z²s²/ε².
If n ≤ k, then mk = m; sufficient sample simulations were done to ensure that mk is within ε of μ. If n > k, then n simulations can be run “from scratch,” or, since k simulations have already been done, one can just run n – k more simulations and add their results into those from the sample simulations:
s = mk * k;
for i = k + 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
m = s / n;
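A sketch tying this section's pieces together: a pilot phase computes the running mean and sample variance in one pass, the z-score formula above gives n, and the remaining simulations are then run and averaged. The simulate argument is any function that takes a random number generator and returns one simulation result (for example, the throws_to_reach function from the dice sketch above); the seeds and example parameters are arbitrary.

import math
import random

def required_sample_size(simulate, k, epsilon, z, seed=0):
    """Pilot phase: run k sample simulations (k must be at least 2), computing
    the running mean and variance in one pass, and return (n, pilot mean,
    running sum), where n is the number of simulations needed."""
    rng = random.Random(seed)
    mean = simulate(rng)
    m2 = 0.0
    total = mean
    for i in range(2, k + 1):
        r = simulate(rng)
        delta = r - mean
        mean += delta / i
        m2 += (i - 1) / i * delta * delta
        total += r
    variance = m2 / (k - 1)                       # sample variance s²
    n = math.ceil(z * z * variance / (epsilon * epsilon))
    return n, mean, total

def finish_run(simulate, total, k, n, seed=1):
    """Run the remaining n - k simulations (if any) and return the estimate m."""
    rng = random.Random(seed)
    for _ in range(max(0, n - k)):
        total += simulate(rng)
    return total / max(n, k)

# Example: 95% confidence (z ≈ 1.96), tolerance epsilon = 0.05
# n, pilot_mean, total = required_sample_size(lambda rng: throws_to_reach(100, rng),
#                                             k=1000, epsilon=0.05, z=1.96)
# m = finish_run(lambda rng: throws_to_reach(100, rng), total, k=1000, n=n)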
A formula when simulations' results are bounded
An alternate formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r1, r2, …ri, … rn be such that a ≤ ri ≤ b for finite a and b. To have confidence of at least δ that |μ – m| < ε/2, use a value for n such that n ≥ 2(b – a)²ln(2/(1 – δ/100))/ε².
For example, if δ = 99%, then n ≥ 2(b – a)²ln(2/0.01)/ε² ≈ 10.6(b – a)²/ε².
Computational costs
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
History
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve the Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanislaw Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993 that Gordon et al. published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all publications on Sequential Monte Carlo methodologies, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, presented natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor any discussion of the bias of the estimates or of genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
Definitions
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are examples of each; a code sketch illustrating the third case appears after them:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1] at one time, or once at many different times, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
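To make the third case concrete, here is a minimal, illustrative sketch (not taken from any source cited here) of a Monte Carlo simulation of coin tossing; the 0.50 threshold follows the description above, while the number of tosses and the seed are arbitrary choices:

```python
import random

# Minimal sketch: a Monte Carlo simulation of repeated coin tossing
# using pseudo-random uniform draws from [0, 1).
def monte_carlo_coin_simulation(n_tosses: int, seed: int = 0) -> float:
    """Return the fraction of heads over n_tosses simulated coin flips."""
    rng = random.Random(seed)
    heads = 0
    for _ in range(n_tosses):
        u = rng.random()      # one pseudo-random uniform draw
        if u <= 0.5:          # values <= 0.50 are designated heads
            heads += 1
    return heads / n_tosses

if __name__ == "__main__":
    print(monte_carlo_coin_simulation(100_000))  # expected to be close to 0.5
```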
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
Monte Carlo and random numbers
The main idea behind this method is that results are computed based on repeated random sampling and statistical analysis. A Monte Carlo simulation is, in effect, a random experiment, used in cases where the results of such experiments are not known in advance.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
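As a hedged illustration of such a transformation, the sketch below uses inverse transform sampling to turn uniform pseudo-random numbers into exponentially distributed ones; the rate parameter and sample count are arbitrary choices made only for this example:

```python
import math
import random

# Minimal sketch of pseudo-random number sampling by inverse transform:
# if U is uniform on (0, 1), then X = -ln(1 - U) / lam follows an
# exponential distribution with rate parameter lam.
def sample_exponential(lam: float, rng: random.Random) -> float:
    u = rng.random()                 # uniform pseudo-random number in [0, 1)
    return -math.log(1.0 - u) / lam  # inverse of the exponential CDF

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))   # sample mean should be close to 1/lam = 0.5
```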
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10^7 random numbers.
Monte Carlo simulation versus "what if" scenarios
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
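A minimal sketch of this contrast, using a hypothetical two-input cost model and triangular distributions (all figures are invented for illustration), might look like the following:

```python
import random

# Sketch comparing "what if" scenario analysis with a Monte Carlo simulation
# of a toy cost model: total cost = labor + materials, each described here by
# an assumed triangular distribution (low, most likely, high).
rng = random.Random(1)

labor = (80, 100, 150)       # low, most likely, high (hypothetical figures)
materials = (40, 50, 90)

# "What if" analysis: combine the extreme cases directly.
best_case = labor[0] + materials[0]
worst_case = labor[2] + materials[2]
print("what-if range:", best_case, "to", worst_case)

# Monte Carlo: sample each input and look at the spread of outcomes.
totals = sorted(
    rng.triangular(labor[0], labor[2], labor[1])
    + rng.triangular(materials[0], materials[2], materials[1])
    for _ in range(100_000)
)
print("5th-95th percentile:", totals[5_000], "to", totals[95_000])
```

With these assumed inputs, the 5th–95th percentile band of the Monte Carlo outcomes is noticeably narrower than the best-to-worst "what if" range, illustrating the point made above about rare events in the tails.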
Applications
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
Physical sciences
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Engineering
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
Climate change and radiative forcing
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
Computational biology
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
Computer graphics
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
Applied statistics
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
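A short, illustrative sketch of such a Monte Carlo permutation test for a difference in means (the data and the number of permutations are hypothetical) could be written as follows:

```python
import random

# Sketch of a Monte Carlo permutation test for a difference in means,
# using randomly drawn permutations rather than full enumeration.
def mc_permutation_test(x, y, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # one randomly drawn permutation
        new_x, new_y = pooled[:len(x)], pooled[len(x):]
        stat = abs(sum(new_x) / len(new_x) - sum(new_y) / len(new_y))
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)            # Monte Carlo p-value estimate

print(mc_permutation_test([2.1, 2.5, 2.8, 3.0], [1.2, 1.4, 1.9, 2.0]))
```

Note that duplicate permutations may be drawn, which is exactly the minor loss of precision traded for not having to track which permutations were already used.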
Artificial intelligence for games
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at the root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
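The following compact sketch is one possible illustration of the four steps, applied to a toy Nim game (players alternately remove 1–3 stones and the player taking the last stone wins); it is a simplified example, not a production MCTS implementation:

```python
import math
import random

# Compact sketch of the four MCTS steps (selection, expansion, simulation,
# backpropagation) for a toy Nim game.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player   # player to move at this node
        self.parent, self.move = parent, move
        self.children, self.untried = [], legal_moves(stones)
        self.visits, self.wins = 0, 0.0             # wins counted for the parent's player

    def ucb_child(self, c=1.4):                     # UCB1 selection rule
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    while stones > 0:                               # random playout to the end
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player                               # the player who took the last stone

def mcts_best_move(root_stones, root_player, iterations=5000):
    root = Node(root_stones, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend with UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for a previously untried move.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: play a random game from the new node.
        winner = rollout(node.stones, node.player) if node.stones > 0 else 1 - node.player
        # 4. Backpropagation: update visit and win counts up to the root.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

# With enough iterations this should pick 2, leaving a multiple of 4 for the opponent.
print(mcts_best_move(root_stones=10, root_player=0))
```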
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
Design and visuals
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
Search and rescue
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
Finance and business
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing and default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
Law
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
Library science
A Monte Carlo approach has also been used to simulate the number of book publications by genre in Malaysia. The simulation utilized previously published national book publication data and book prices by genre in the local market. The Monte Carlo results were used to determine which book genres Malaysians are fond of, and to compare book publications between Malaysia and Japan.
Other
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
Use in mathematics
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Integration
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10^100 points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
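A minimal sketch of this idea, estimating a 100-dimensional integral whose exact value is known, might look like the following; the integrand and sample count are chosen only for illustration:

```python
import random

# Sketch of plain Monte Carlo integration of f over the unit hypercube [0,1]^d.
# The error decreases as 1/sqrt(N) regardless of the dimension d.
def mc_integrate(f, dim, n_samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]   # one random point in [0,1]^dim
        total += f(x)
    return total / n_samples                     # sample mean estimates the integral

# Example: the integral of sum(x_i) over [0,1]^100 equals 50 exactly.
estimate = mc_integrate(lambda x: sum(x), dim=100, n_samples=20_000)
print(estimate)   # should be close to 50
```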
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
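A hedged sketch of importance sampling for a one-dimensional integral, using an assumed proposal density that concentrates samples where the integrand is large, is shown below:

```python
import math
import random

# Sketch of importance sampling for I = integral of f(x) over [0,1],
# with f concentrated near 0. Instead of uniform draws, sample x from the
# proposal density g(x) = 2(1 - x) and average the weighted values f(x)/g(x).
def importance_sampling(f, n_samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        u = rng.random()
        x = 1.0 - math.sqrt(1.0 - u)       # inverse-CDF sample from g(x) = 2(1 - x)
        total += f(x) / (2.0 * (1.0 - x))  # weight by 1/g(x)
    return total / n_samples

f = lambda x: math.exp(-5.0 * x)           # large near 0, small near 1
print(importance_sampling(f, 100_000))     # exact value is (1 - e^-5)/5, about 0.19865
```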
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
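As an illustration, a minimal random-walk Metropolis–Hastings sampler targeting a standard normal density (known only up to a normalizing constant) can be sketched as follows; the step size and chain length are arbitrary choices:

```python
import math
import random

# Sketch of a random-walk Metropolis-Hastings sampler.
def metropolis_hastings(log_target, n_steps, step_size=1.0, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step_size)          # symmetric proposal
        log_accept = log_target(proposal) - log_target(x)
        if math.log(rng.random() + 1e-300) < log_accept:  # +1e-300 guards log(0)
            x = proposal
        samples.append(x)
    return samples

log_target = lambda x: -0.5 * x * x            # log of unnormalized standard normal
chain = metropolis_hastings(log_target, 50_000)
print(sum(chain) / len(chain))                 # mean should be near 0
print(sum(v * v for v in chain) / len(chain))  # variance should be near 1
```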
Simulation and optimization
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. Comprehensive reviews of many issues related to simulation and optimization are available in the literature.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. Suppose, however, that instead of minimizing the total distance traveled to visit each desired destination, we wanted to minimize the total time needed to reach each destination. This goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine the optimal path a different kind of simulation-based optimization is required: first characterize the range of potential times needed to go from one point to another (represented by a probability distribution in this case rather than a specific distance), and then optimize the travel decisions to identify the best path to follow taking that uncertainty into account.
Inverse problems
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
Philosophy
A popular exposition of the Monte Carlo method was given by McCracken. The method's general philosophy was discussed by Elishakoff, and by Grüne-Yanoff and Weirich.
See also
Auxiliary-field Monte Carlo
Biology Monte Carlo method
Direct simulation Monte Carlo
Dynamic Monte Carlo method
Ergodicity
Genetic algorithms
Kinetic Monte Carlo
List of software for Monte Carlo molecular modeling
Mean-field particle methods
Monte Carlo method for photon transport
Monte Carlo methods for electron transport
Monte Carlo N-Particle Transport Code
Morris method
Multilevel Monte Carlo method
Quasi-Monte Carlo method
Sobol sequence
Temporal difference learning
References
Citations
Sources
External links
Numerical analysis
Statistical mechanics
Computational physics
Sampling techniques
Statistical approximations
Stochastic simulation
Randomized algorithms
Risk analysis methodologies
Ligand (biochemistry)
In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. The etymology stems from Latin ligare, which means 'to bind'. In protein-ligand binding, the ligand is usually a molecule which produces a signal by binding to a site on a target protein. The binding typically results in a change of conformational isomerism (conformation) of the target protein. In DNA-ligand binding studies, the ligand can be a small molecule, ion, or protein which binds to the DNA double helix. The relationship between ligand and binding partner is a function of charge, hydrophobicity, and molecular structure.
Binding occurs by intermolecular forces, such as ionic bonds, hydrogen bonds and Van der Waals forces. The association or docking is actually reversible through dissociation. Measurably irreversible covalent bonding between a ligand and target molecule is atypical in biological systems. In contrast to the definition of ligand in metalorganic and inorganic chemistry, in biochemistry it is ambiguous whether the ligand generally binds at a metal site, as is the case in hemoglobin. In general, the interpretation of ligand is contextual with regards to what sort of binding has been observed.
Ligand binding to a receptor protein alters the conformation by affecting the three-dimensional shape orientation. The conformation of a receptor protein composes the functional state. Ligands include substrates, inhibitors, activators, signaling lipids, and neurotransmitters. The rate of binding is called affinity, and this measurement typifies a tendency or strength of the effect. Binding affinity is actualized not only by host–guest interactions, but also by solvent effects that can play a dominant, steric role which drives non-covalent binding in solution. The solvent provides a chemical environment for the ligand and receptor to adapt, and thus accept or reject each other as partners.
Radioligands are radioisotope labeled compounds used in vivo as tracers in PET studies and for in vitro binding studies.
Receptor/ligand binding affinity
The interaction of ligands with their binding sites can be characterized in terms of a binding affinity. In general, high-affinity ligand binding results from greater attractive forces between the ligand and its receptor while low-affinity ligand binding involves less attractive force. In general, high-affinity binding results in a higher occupancy of the receptor by its ligand than is the case for low-affinity binding; the residence time (lifetime of the receptor-ligand complex) does not correlate. High-affinity binding of ligands to receptors is often physiologically important when some of the binding energy can be used to cause a conformational change in the receptor, resulting in altered behavior for example of an associated ion channel or enzyme.
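As a hedged illustration (not from the source text), the standard mass-action relationship between ligand concentration, dissociation constant Kd, and fractional receptor occupancy can be sketched as follows; the concentrations used are hypothetical:

```python
# Sketch of the mass-action relationship between ligand concentration [L],
# dissociation constant Kd, and fractional occupancy:
#   occupancy = [L] / ([L] + Kd)
# A high-affinity ligand (small Kd) reaches high occupancy at low concentration.
def fractional_occupancy(ligand_conc_nM: float, kd_nM: float) -> float:
    return ligand_conc_nM / (ligand_conc_nM + kd_nM)

for kd in (1.0, 100.0):                      # high- vs low-affinity case (nM)
    occ = fractional_occupancy(ligand_conc_nM=10.0, kd_nM=kd)
    print(f"Kd = {kd:6.1f} nM -> occupancy at 10 nM ligand = {occ:.2f}")
```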
A ligand that can bind to and alter the function of the receptor that triggers a physiological response is called a receptor agonist. Ligands that bind to a receptor but fail to activate the physiological response are receptor antagonists.
Agonist binding to a receptor can be characterized both in terms of how much physiological response can be triggered (that is, the efficacy) and in terms of the concentration of the agonist that is required to produce the physiological response (often measured as EC50, the concentration required to produce the half-maximal response). High-affinity ligand binding implies that a relatively low concentration of a ligand is adequate to maximally occupy a ligand-binding site and trigger a physiological response. Receptor affinity is measured by an inhibition constant or Ki value, the concentration required to occupy 50% of the receptor. Ligand affinities are most often measured indirectly as an IC50 value from a competition binding experiment where the concentration of a ligand required to displace 50% of a fixed concentration of reference ligand is determined. The Ki value can be estimated from IC50 through the Cheng Prusoff equation. Ligand affinities can also be measured directly as a dissociation constant (Kd) using methods such as fluorescence quenching, isothermal titration calorimetry or surface plasmon resonance.
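A minimal sketch of the Cheng–Prusoff conversion from IC50 to Ki for a competition binding experiment is shown below; the numerical values are hypothetical:

```python
# Sketch of the Cheng-Prusoff relationship for a competition binding experiment:
#   Ki = IC50 / (1 + [L] / Kd)
# where [L] is the concentration of the radiolabeled reference ligand and Kd is
# its dissociation constant.
def cheng_prusoff_ki(ic50_nM: float, radioligand_conc_nM: float, kd_nM: float) -> float:
    return ic50_nM / (1.0 + radioligand_conc_nM / kd_nM)

print(cheng_prusoff_ki(ic50_nM=50.0, radioligand_conc_nM=2.0, kd_nM=1.0))  # about 16.7 nM
```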
Low-affinity binding (high Ki level) implies that a relatively high concentration of a ligand is required before the binding site is maximally occupied and the maximum physiological response to the ligand is achieved. In the example shown to the right, two different ligands bind to the same receptor binding site. Only one of the agonists shown can maximally stimulate the receptor and, thus, can be defined as a full agonist. An agonist that can only partially activate the physiological response is called a partial agonist. In this example, the concentration at which the full agonist (red curve) can half-maximally activate the receptor is about 5 x 10−9 Molar (nM = nanomolar).
Binding affinity is most commonly determined using a radiolabeled ligand, known as a tagged ligand. Homologous competitive binding experiments involve binding competition between a tagged ligand and an untagged ligand.
Real-time based methods, which are often label-free, such as surface plasmon resonance, dual-polarization interferometry and multi-parametric surface plasmon resonance (MP-SPR), can quantify affinity not only from concentration-based assays, but also from the kinetics of association and dissociation and, in the latter case, from the conformational change induced upon binding. MP-SPR also enables measurements in high saline dissociation buffers thanks to a unique optical setup. Microscale thermophoresis (MST), an immobilization-free method, has also been developed; it allows determination of the binding affinity without any limitation on the ligand's molecular weight.
For the use of statistical mechanics in a quantitative study of the ligand-receptor binding affinity, see the comprehensive article on the configurational partition function.
Drug or hormone binding potency
Binding affinity data alone does not determine the overall potency of a drug or a naturally produced (biosynthesized) hormone.
Potency is a result of the complex interplay of both the binding affinity and the ligand efficacy.
Drug or hormone binding efficacy
Ligand efficacy refers to the ability of the ligand to produce a biological response upon binding to the target receptor and the quantitative magnitude of this response. This response may be as an agonist, antagonist, or inverse agonist, depending on the physiological response produced.
Selective and non-selective
Selective ligands have a tendency to bind to very limited kinds of receptor, whereas non-selective ligands bind to several types of receptors. This plays an important role in pharmacology, where drugs that are non-selective tend to have more adverse effects, because they bind to several other receptors in addition to the one generating the desired effect.
Hydrophobic ligands
For hydrophobic ligands (e.g. PIP2) in complex with a hydrophobic protein (e.g. lipid-gated ion channels) determining the affinity is complicated by non-specific hydrophobic interactions. Non-specific hydrophobic interactions can be overcome when the affinity of the ligand is high. For example, PIP2 binds with high affinity to PIP2 gated ion channels.
Bivalent ligand
Bivalent ligands consist of two drug-like molecules (pharmacophores or ligands) connected by an inert linker. There are various kinds of bivalent ligands, and they are often classified based on what the pharmacophores target. Homobivalent ligands target two of the same receptor types. Heterobivalent ligands target two different receptor types. Bitopic ligands target an orthosteric binding site and an allosteric binding site on the same receptor.
In scientific research, bivalent ligands have been used to study receptor dimers and to investigate their properties. This class of ligands was pioneered by Philip S. Portoghese and coworkers while studying the opioid receptor system. Bivalent ligands were also reported early on by Michael Conn and coworkers for the gonadotropin-releasing hormone receptor. Since these early reports, there have been many bivalent ligands reported for various G protein-coupled receptor (GPCR) systems including cannabinoid, serotonin, oxytocin, and melanocortin receptor systems, and for GPCR-LIC systems (D2 and nACh receptors).
Bivalent ligands usually tend to be larger than their monovalent counterparts and are therefore not 'drug-like' as defined by Lipinski's rule of five. Many believe this limits their applicability in clinical settings. In spite of these beliefs, there have been many bivalent ligands for which successful pre-clinical animal studies have been reported. Given that some bivalent ligands can have many advantages compared to their monovalent counterparts (such as tissue selectivity, increased binding affinity, and increased potency or efficacy), bivalents may offer some clinical advantages as well.
Mono- and polydesmic ligands
Ligands of proteins can also be characterized by the number of protein chains they bind. "Monodesmic" ligands (μόνος: single, δεσμός: binding) are ligands that bind a single protein chain, while "polydesmic" ligands (πολλοί: many) are frequent in protein complexes, and are ligands that bind more than one protein chain, typically in or near protein interfaces. Recent research shows that the type of ligands and binding site structure has profound consequences for the evolution, function, allostery and folding of protein complexes.
Privileged scaffold
A privileged scaffold is a molecular framework or chemical moiety that is statistically recurrent among known drugs or among a specific array of biologically active compounds. These privileged elements can be used as a basis for designing new active biological compounds or compound libraries.
Methods used to study binding
The main methods used to study protein–ligand interactions are hydrodynamic and calorimetric techniques, together with spectroscopic and structural methods such as
Fourier transform spectroscopy
Raman spectroscopy
Fluorescence spectroscopy
Circular dichroism
Nuclear magnetic resonance
Mass spectrometry
Atomic force microscope
Paramagnetic probes
Dual polarisation interferometry
Multi-parametric surface plasmon resonance
Ligand binding assay and radioligand binding assay
Other techniques include:
fluorescence intensity,
bimolecular fluorescence complementation,
FRET (fluorescent resonance energy transfer) / FRET quenching
surface plasmon resonance,
bio-layer interferometry,
co-immunoprecipitation,
indirect ELISA,
equilibrium dialysis,
gel electrophoresis,
far western blot,
fluorescence polarization anisotropy,
electron paramagnetic resonance,
microscale thermophoresis,
switchSENSE.
The dramatically increased computing power of supercomputers and personal computers has made it possible to study protein–ligand interactions also by means of computational chemistry. For example, a worldwide grid of well over a million ordinary PCs was harnessed for cancer research in the project grid.org, which ended in April 2007. Grid.org has been succeeded by similar projects such as World Community Grid, Human Proteome Folding Project, Compute Against Cancer and Folding@Home.
See also
Agonist
Schild regression
Allosteric regulation
Ki Database
Docking@Home
GPUGRID.net
DNA binding ligand
BindingDB
SAMPL Challenge
References
External links
BindingDB, a public database of measured protein-ligand binding affinities.
BioLiP, a comprehensive database for ligand-protein interactions.
Biomolecules
Cell signaling
Chemical bonding
Proteins
KEGG
KEGG (Kyoto Encyclopedia of Genes and Genomes) is a collection of databases dealing with genomes, biological pathways, diseases, drugs, and chemical substances. KEGG is utilized for bioinformatics research and education, including data analysis in genomics, metagenomics, metabolomics and other omics studies, modeling and simulation in systems biology, and translational research in drug development.
The KEGG database project was initiated in 1995 by Minoru Kanehisa, professor at the Institute for Chemical Research, Kyoto University, under the then ongoing Japanese Human Genome Program. Foreseeing the need for a computerized resource that can be used for biological interpretation of genome sequence data, he started developing the KEGG PATHWAY database. It is a collection of manually drawn KEGG pathway maps representing experimental knowledge on metabolism and various other functions of the cell and the organism. Each pathway map contains a network of molecular interactions and reactions and is designed to link genes in the genome to gene products (mostly proteins) in the pathway. This has enabled the analysis called KEGG pathway mapping, whereby the gene content in the genome is compared with the KEGG PATHWAY database to examine which pathways and associated functions are likely to be encoded in the genome.
According to the developers, KEGG is a "computer representation" of the biological system. It integrates building blocks and wiring diagrams of the system—more specifically, genetic building blocks of genes and proteins, chemical building blocks of small molecules and reactions, and wiring diagrams of molecular interaction and reaction networks. This concept is realized in the following databases of KEGG, which are categorized into systems, genomic, chemical, and health information.
Systems information
PATHWAY: pathway maps for cellular and organismal functions
MODULE: modules or functional units of genes
BRITE: hierarchical classifications of biological entities
Genomic information
GENOME: complete genomes
GENES: genes and proteins in the complete genomes
ORTHOLOGY: ortholog groups of genes in the complete genomes
Chemical information
COMPOUND, GLYCAN: chemical compounds and glycans
REACTION, RPAIR, RCLASS: chemical reactions
ENZYME: enzyme nomenclature
Health information
DISEASE: human diseases
DRUG: approved drugs
ENVIRON: crude drugs and health-related substances
Databases
Systems information
The KEGG PATHWAY database, the wiring diagram database, is the core of the KEGG resource. It is a collection of pathway maps integrating many entities including genes, proteins, RNAs, chemical compounds, glycans, and chemical reactions, as well as disease genes and drug targets, which are stored as individual entries in the other databases of KEGG. The pathway maps are classified into the following sections:
Metabolism
Genetic information processing (transcription, translation, replication and repair, etc.)
Environmental information processing (membrane transport, signal transduction, etc.)
Cellular processes (cell growth, cell death, cell membrane functions, etc.)
Organismal systems (immune system, endocrine system, nervous system, etc.)
Human diseases
Drug development
The metabolism section contains aesthetically drawn global maps showing an overall picture of metabolism, in addition to regular metabolic pathway maps. The low-resolution global maps can be used, for example, to compare metabolic capacities of different organisms in genomics studies and different environmental samples in metagenomics studies. In contrast, KEGG modules in the KEGG MODULE database are higher-resolution, localized wiring diagrams, representing tighter functional units within a pathway map, such as subpathways conserved among specific organism groups and molecular complexes. KEGG modules are defined as characteristic gene sets that can be linked to specific metabolic capacities and other phenotypic features, so that they can be used for automatic interpretation of genome and metagenome data.
Another database that supplements KEGG PATHWAY is the KEGG BRITE database. It is an ontology database containing hierarchical classifications of various entities including genes, proteins, organisms, diseases, drugs, and chemical compounds. While KEGG PATHWAY is limited to molecular interactions and reactions of these entities, KEGG BRITE incorporates many different types of relationships.
Genomic information
Several months after the KEGG project was initiated in 1995, the first report of the completely sequenced bacterial genome was published. Since then all published complete genomes are accumulated in KEGG for both eukaryotes and prokaryotes. The KEGG GENES database contains gene/protein-level information and the KEGG GENOME database contains organism-level information for these genomes. The KEGG GENES database consists of gene sets for the complete genomes, and genes in each set are given annotations in the form of establishing correspondences to the wiring diagrams of KEGG pathway maps, KEGG modules, and BRITE hierarchies.
These correspondences are made using the concept of orthologs. The KEGG pathway maps are drawn based on experimental evidence in specific organisms but they are designed to be applicable to other organisms as well, because different organisms, such as human and mouse, often share identical pathways consisting of functionally identical genes, called orthologous genes or orthologs. All the genes in the KEGG GENES database are being grouped into such orthologs in the KEGG ORTHOLOGY (KO) database. Because the nodes (gene products) of KEGG pathway maps, as well as KEGG modules and BRITE hierarchies, are given KO identifiers, the correspondences are established once genes in the genome are annotated with KO identifiers by the genome annotation procedure in KEGG.
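As a hedged illustration only, gene-to-pathway correspondences of this kind can also be retrieved programmatically. The sketch below assumes the public KEGG REST interface at https://rest.kegg.jp and its "link" operation returning tab-separated identifier pairs; these endpoint details are an assumption here and should be checked against current KEGG documentation and licensing terms before use.

```python
from urllib.request import urlopen

# Hedged sketch: map human ("hsa") genes to KEGG pathways via the public REST
# interface (assumed base URL and output format; verify against KEGG docs).
url = "https://rest.kegg.jp/link/pathway/hsa"
with urlopen(url) as response:
    text = response.read().decode("utf-8")

gene_to_pathways = {}
for line in text.splitlines():
    if not line.strip():
        continue
    gene_id, pathway_id = line.split("\t")   # e.g. "hsa:10458" and "path:hsa04520" (assumed format)
    gene_to_pathways.setdefault(gene_id, []).append(pathway_id)

print(len(gene_to_pathways), "genes linked to at least one KEGG pathway")
```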
Chemical information
The KEGG metabolic pathway maps are drawn to represent the dual aspects of the metabolic network: the genomic network of how genome-encoded enzymes are connected to catalyze consecutive reactions and the chemical network of how chemical structures of substrates and products are transformed by these reactions. A set of enzyme genes in the genome will identify enzyme relation networks when superimposed on the KEGG pathway maps, which in turn characterize chemical structure transformation networks allowing interpretation of biosynthetic and biodegradation potentials of the organism. Alternatively, a set of metabolites identified in the metabolome will lead to the understanding of enzymatic pathways and enzyme genes involved.
The databases in the chemical information category, which are collectively called KEGG LIGAND, are organized by capturing knowledge of the chemical network. In the beginning of the KEGG project, KEGG LIGAND consisted of three databases: KEGG COMPOUND for chemical compounds, KEGG REACTION for chemical reactions, and KEGG ENZYME for reactions in the enzyme nomenclature. Currently, there are additional databases: KEGG GLYCAN for glycans and two auxiliary reaction databases called RPAIR (reactant pair alignments) and RCLASS (reaction class). KEGG COMPOUND has also been expanded to contain various compounds such as xenobiotics, in addition to metabolites.
Health information
In KEGG, diseases are viewed as perturbed states of the biological system caused by perturbants of genetic factors and environmental factors, and drugs are viewed as different types of perturbants. The KEGG PATHWAY database includes not only the normal states but also the perturbed states of the biological systems. However, disease pathway maps cannot be drawn for most diseases because molecular mechanisms are not well understood. An alternative approach is taken in the KEGG DISEASE database, which simply catalogs known genetic factors and environmental factors of diseases. These catalogs may eventually lead to more complete wiring diagrams of diseases.
The KEGG DRUG database contains active ingredients of approved drugs in Japan, the US, and Europe. They are distinguished by chemical structures and/or chemical components and associated with target molecules, metabolizing enzymes, and other molecular interaction network information in the KEGG pathway maps and the BRITE hierarchies. This enables an integrated analysis of drug interactions with genomic information. Crude drugs and other health-related substances, which are outside the category of approved drugs, are stored in the KEGG ENVIRON database. The databases in the health information category are collectively called KEGG MEDICUS, which also includes package inserts of all marketed drugs in Japan.
Subscription model
In July 2011 KEGG introduced a subscription model for FTP download due to a significant cutback of government funding. KEGG continues to be freely available through its website, but the subscription model has raised discussions about sustainability of bioinformatics databases.
See also
Comparative Toxicogenomics Database - CTD integrates KEGG pathways with toxicogenomic and disease data
ConsensusPathDB, a molecular functional interaction database, integrating information from KEGG
Gene Ontology (GO)
PubMed
Uniprot
Gene Disease Database
References
External links
KEGG website
GenomeNet mirror site
The entry for KEGG in MetaBase
Biological databases
Genetic engineering in Japan
Online databases
Systems biology
21st-century encyclopedias
Steps to an Ecology of Mind
Steps to an Ecology of Mind is a collection of Gregory Bateson's short works over his long and varied career. Subject matter includes essays on anthropology, cybernetics, psychiatry, and epistemology. It was originally published by Ballantine Books in 1972 (republished 2000 with foreword by Mary Catherine Bateson).
Part I: Metalogues
The book begins with a series of metalogues, which take the form of conversations with his daughter Mary Catherine Bateson. The metalogues are mostly thought exercises with titles such as "What is an Instinct" and "How Much Do You Know." In the metalogues, the playful dialectic structure itself is closely related to the subject matter of the piece.
DEFINITION: A metalogue is a conversation about some problematic subject. This conversation should be such that not only do the participants discuss the problem but the structure of the conversation as a whole is also relevant to the same subject. Only some of the conversations here presented achieve this double format.
Notably, the history of evolutionary theory is inevitably a metalogue between man and nature, in which the creation and interaction of ideas must necessarily exemplify evolutionary process.
Why Do Things Get in a Muddle? (1948, previously unpublished)
Why Do Frenchmen? (1951, Impulse; 1953, ETC: A Review of General Semantics, Vol. X)
About Games and Being Serious (1953, ETC: A Review of General Semantics, Vol. X)
How Much Do You Know? (1953, ETC: A Review of General Semantics, Vol. X)
Why Do Things Have Outlines? (1953, ETC: A Review of General Semantics, Vol. XI)
Why a Swan? (1954, Impulse)
What Is an Instinct? (1969, Sebeok, Approaches to Animal Communication)
Part II: Form and Pattern in Anthropology
Part II is a collection of anthropological writings, many of which were written while he was married to Margaret Mead.
Culture Contact and Schismogenesis (1935, Man, Article 199, Vol. XXXV)
Experiments in Thinking About Observed Ethnological Material (1940, Seventh Conference on Methods in Philosophy and the Sciences; 1941, Philosophy of Science, Vol. 8, No. 1)
Morale and National Character (1942, Civilian Morale, Watson)
Bali: The Value System of a Steady State (1949, Social Structure: Studies Presented to A.R. Radcliffe-Brown, Fortes)
Style, Grace, and Information in Primitive Art (1967, A Study of Primitive Art, Forge)
Part III: Form and Pathology in Relationship
Part III is devoted to the theme of "Form and Pathology in Relationships." His essay on alcoholism examines the alcoholic state of mind, and the methodology of Alcoholics Anonymous within the framework of the then-nascent field of cybernetics.
Social Planning and the Concept of Deutero-Learning (a comment on Margaret Mead's article "The Comparative Study of Culture and the Purposive Cultivation of Democratic Values", 1942, Science, Philosophy and Religion, Second Symposium)
A Theory of Play and Fantasy (1954, A.P.A. Regional Research Conference in Mexico City, March 11; 1955, A.P.A. Psychiatric Research Reports)
Epidemiology of a Schizophrenia (edited version of a talk, "How the Deviant Sees His Society," given in 1955 at a conference on "The Epidemiology of Mental Health," Brighton, Utah)
Toward a Theory of Schizophrenia (1956, Behavioral Science, Vol. I, No. 4)
The Group Dynamics of Schizophrenia (1960)
Minimal Requirements for a Theory of Schizophrenia (1959)
Double Bind, 1969 (1969)
The Logical Categories of Learning and Communication (1968)
The Cybernetics of "Self": A Theory of Alcoholism (1971)
Part IV: Biology and Evolution
On Empty-Headedness Among Biologists and State Boards of Education (in BioScience, Vol. 20, 1970)
The Role of Somatic Change in Evolution (in the journal of Evolution, Vol 17, 1963)
Problems in Cetacean and Other Mammalian Communication (appeared as Chapter 25, pp. 569–799, in Whales, Dolphins and Porpoises, edited by Kenneth S. Norris, University of California Press, 1966)
A Re-examination of "Bateson's Rule" (accepted for publication in the Journal of Genetics)
Part V: Epistemology and Ecology
Cybernetic Explanation (from the American Behavioral Scientist, Vol. 10, No. 8, April 1967, pp. 29–32)
Redundancy and Coding (appeared as Chapter 22 in Animal Communication: Techniques of Study and Results of Research, edited by Thomas A. Sebeok, 1968, Indiana University Press)
Conscious Purpose Versus Nature (this lecture was given in August, 1968, to the London Conference on the Dialectics of Liberation, appearing in a book of the same name, Penguin Books)
Effects of Conscious Purpose on Human Adaptation (prepared as the Bateson's position paper for Wenner-Gren Foundation Conference on "Effects of Conscious Purpose on Human Adaptation". Bateson chaired the conference held in Burg Wartenstein, Austria, July 17–24, 1968)
Form, Substance, and Difference (the Nineteenth Annual Korzybski Memorial Lecture, January 9, 1970, under the auspices of the Institute of General Semantics; appeared in the General Semantics Bulletin, No. 37, 1970)
Part VI: Crisis in the Ecology of Mind
From Versailles to Cybernetics (previously unpublished. This lecture was given 21 April 1966, to the "Two Worlds Symposium" at (CSU) Sacramento State College)
Pathologies of Epistemology (given at the Second Conference on Mental Health in Asia and the Pacific, 1969, at the East–West Center, Hawaii, appearing in the report of that conference)
The Roots of Ecological Crisis (testimony on behalf of the University of Hawaii Committee on Ecology and Man, presented in March 1970)
Ecology and Flexibility in Urban Civilization (written for a conference convened by Bateson in October 1970 on "Restructuring the Ecology of a Great City" and subsequently edited)
See also
Double bind
Information ecology
Philosophy of mind
Social sustainability
Systems philosophy
Systems theory
Notes and references
1972 books
Anthropology books
Cognitive science literature
Systems theory books
University of Chicago Press books
Solution (chemistry)
In chemistry, a solution is defined by IUPAC as "A liquid or solid phase containing more than one substance, when for convenience one (or more) substance, which is called the solvent, is treated differently from the other substances, which are called solutes. When, as is often but not necessarily the case, the sum of the mole fractions of solutes is small compared with unity, the solution is called a dilute solution. A superscript attached to the ∞ symbol for a property of a solution denotes the property in the limit of infinite dilution." One important parameter of a solution is the concentration, which is a measure of the amount of solute in a given amount of solution or solvent. The term "aqueous solution" is used when one of the solvents is water.
Types
Homogeneous means that the components of the mixture form a single phase. Heterogeneous means that the components of the mixture form more than one phase. The properties of the mixture (such as concentration, temperature, and density) can be uniformly distributed through the volume, but only in the absence of diffusion phenomena or after their completion. Usually, the substance present in the greatest amount is considered the solvent. Solvents can be gases, liquids, or solids. One or more components present in the solution other than the solvent are called solutes. The solution has the same physical state as the solvent.
Gaseous mixtures
If the solvent is a gas, only gases (non-condensable) or vapors (condensable) are dissolved under a given set of conditions. An example of a gaseous solution is air (oxygen and other gases dissolved in nitrogen). Since interactions between gaseous molecules play almost no role, non-condensable gases form rather trivial solutions. In the literature, they are not even classified as solutions, but simply addressed as homogeneous mixtures of gases. The Brownian motion and the permanent molecular agitation of gas molecules guarantee the homogeneity of gaseous systems. Non-condensable gaseous mixtures (e.g., air/CO2, or air/xenon) do not spontaneously demix or sediment into distinctly stratified, separate gas layers according to their relative density. Diffusion forces efficiently counteract gravitation forces under normal conditions prevailing on Earth. The case of condensable vapors is different: once the saturation vapor pressure at a given temperature is reached, vapor excess condenses into the liquid state.
Liquid solutions
Liquids dissolve gases, other liquids, and solids. An example of a dissolved gas is oxygen in water, which allows fish to breathe under water. An example of a dissolved liquid is ethanol in water, as found in alcoholic beverages. An example of a dissolved solid is sugar water, which contains dissolved sucrose.
Solid solutions
If the solvent is a solid, then gases, liquids, and solids can be dissolved.
Gas in solids:
Hydrogen dissolves rather well in metals, especially in palladium; this is studied as a means of hydrogen storage.
Liquid in solid:
Mercury in gold, forming an amalgam
Water in solid salt or sugar, forming moist solids
Hexane in paraffin wax
Polymers containing plasticizers such as phthalate (liquid) in PVC (solid)
Solid in solid:
Steel, basically a solution of carbon atoms in a crystalline matrix of iron atoms
Alloys like bronze and many others
Radium sulfate dissolved in barium sulfate: a true solid solution of Ra in BaSO4
Solubility
The ability of one compound to dissolve in another compound is called solubility. When a liquid can completely dissolve in another liquid the two liquids are miscible. Two substances that can never mix to form a solution are said to be immiscible.
All solutions have a positive entropy of mixing. The interactions between different molecules or ions may be energetically favored or not. If interactions are unfavorable, then the free energy decreases with increasing solute concentration. At some point, the energy loss outweighs the entropy gain, and no more solute particles can be dissolved; the solution is said to be saturated. However, the point at which a solution can become saturated can change significantly with different environmental factors, such as temperature, pressure, and contamination. For some solute-solvent combinations, a supersaturated solution can be prepared by raising the solubility (for example by increasing the temperature) to dissolve more solute and then lowering it (for example by cooling).
Usually, the greater the temperature of the solvent, the more of a given solid solute it can dissolve. However, most gases and some compounds exhibit solubilities that decrease with increased temperature. Such behavior is a result of an exothermic enthalpy of solution. Some surfactants exhibit this behaviour. The solubility of liquids in liquids is generally less temperature-sensitive than that of solids or gases.
Properties
The physical properties of compounds such as melting point and boiling point change when other compounds are added. Together they are called colligative properties. There are several ways to quantify the amount of one compound dissolved in the other compounds collectively called concentration. Examples include molarity, volume fraction, and mole fraction.
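A minimal sketch of two of these concentration measures, molarity and mole fraction, using assumed amounts of sucrose dissolved in water, is given below:

```python
# Sketch (with assumed figures) of two common concentration measures for a
# solution of sucrose (molar mass ~342.3 g/mol) in water (~18.02 g/mol).
sucrose_g, water_g, solution_volume_L = 34.2, 500.0, 0.52

moles_sucrose = sucrose_g / 342.3
moles_water = water_g / 18.02

molarity = moles_sucrose / solution_volume_L                   # mol per litre of solution
mole_fraction = moles_sucrose / (moles_sucrose + moles_water)  # dimensionless

print(f"molarity      ~ {molarity:.3f} mol/L")
print(f"mole fraction ~ {mole_fraction:.4f}")
```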
The properties of ideal solutions can be calculated by the linear combination of the properties of its components. If both solute and solvent exist in equal quantities (such as in a 50% ethanol, 50% water solution), the concepts of "solute" and "solvent" become less relevant, but the substance that is more often used as a solvent is normally designated as the solvent (in this example, water).
Liquid solution characteristics
In principle, all types of liquids can behave as solvents: liquid noble gases, molten metals, molten salts, molten covalent networks, and molecular liquids. In the practice of chemistry and biochemistry, most solvents are molecular liquids. They can be classified into polar and non-polar, according to whether their molecules possess a permanent electric dipole moment. Another distinction is whether their molecules can form hydrogen bonds (protic and aprotic solvents). Water, the most commonly used solvent, is both polar and sustains hydrogen bonds.
Salts dissolve in polar solvents, forming positive and negative ions that are attracted to the negative and positive ends of the solvent molecule, respectively. If the solvent is water, hydration occurs when the charged solute ions become surrounded by water molecules. A standard example is aqueous saltwater. Such solutions are called electrolytes. Whenever salt dissolves in water ion association has to be taken into account.
Polar solutes dissolve in polar solvents, forming polar bonds or hydrogen bonds. As an example, all alcoholic beverages are aqueous solutions of ethanol. On the other hand, non-polar solutes dissolve better in non-polar solvents. Examples are hydrocarbons such as oil and grease that easily mix, while being incompatible with water.
An example of the immiscibility of oil and water is a leak of petroleum from a damaged tanker, that does not dissolve in the ocean water but rather floats on the surface.
See also
Total dissolved solids is a common term in a range of disciplines, and can have different meanings depending on the analytical method used. In water quality, it refers to the amount of residue remaining after the evaporation of water from a sample.
References
External links
Homogeneous chemical mixtures
Alchemical processes
Physical chemistry
Colloidal chemistry
Drug delivery devices
Dosage forms